Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. Distinctions are drawn between transactional and interactional speech, between short and long speaking turns, and between spoken language that is and is not influenced by written…
Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias
Findings on the association between speaking multiple languages and cognitive functioning in old age have so far been inconsistent and inconclusive. The present study therefore set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests of verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed about the different languages they spoke on a regular basis, their educational attainment, their occupation, and their engagement in different activities throughout adulthood. A higher number of regularly spoken languages was significantly associated with better performance in verbal abilities and processing speed, but was unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities and the physical demands of job/gainful activity as respective additional predictors, but not over and above educational attainment or the cognitive level of the job as respective additional predictors. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet this contribution may not be universal, but instead linked to verbal abilities and basic cognitive processing speed. Moreover, it may depend on the other types of cognitive stimulation that individuals also engaged in during their life course.
In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech from the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that it achieved an accuracy of around 70% in the law and order domain. For future work, we plan to develop similar systems for multiple languages.
Jacques Melitz; Farid Toubal
We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or from ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...
Scott, C M; Windsor, J
Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.
Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...
Crowe, Kathryn; McLeod, Sharynne
The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…
A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to variations caused by different speakers, the specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
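The bottleneck idea behind DBFs can be illustrated with a minimal sketch: a deliberately narrow hidden layer forces the network to compress each acoustic frame, and that layer's activations become the per-frame features. Everything below (layer sizes, random weights, random "frames") is invented for illustration and is not the DBF-TV configuration; in the paper the network is trained on phone targets and the DBFs are further modeled with i-vectors rather than a simple mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Illustrative layer sizes: 40-dim acoustic frames -> 256 -> 13 (bottleneck) -> 256 -> 48.
# The layers above the bottleneck exist only to be trained; they are unused at extraction time.
sizes = [40, 256, 13, 256, 48]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def bottleneck_features(frames, bottleneck_layer=2):
    """Forward frames through the net; return activations of the narrow layer."""
    h = frames
    for w in weights[:bottleneck_layer]:
        h = relu(h @ w)
    return h  # shape (n_frames, 13): the compact per-frame representation

# 200 random 40-dim frames stand in for one utterance's acoustic features.
utt = rng.standard_normal((200, 40))
dbf = bottleneck_features(utt)
utt_vector = dbf.mean(axis=0)  # crude utterance-level summary for the sketch
print(dbf.shape, utt_vector.shape)
```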
Westerveld, Marleen F; Gillon, Gail T
This investigation explored the effects of oral narrative elicitation context on children's spoken language performance. Oral narratives were produced by a group of 11 children with reading disability (aged between 7;11 and 9;3) and an age-matched control group of 11 children with typical reading skills in three different contexts: story retelling, story generation, and personal narratives. In the story retelling condition, the children listened to a story on tape while looking at the pictures in a book, before being asked to retell the story without the pictures. In the story generation context, the children were shown a picture containing a scene and were asked to make up their own story. Personal narratives were elicited with the help of photos and short narrative prompts. The transcripts were analysed at microstructure level on measures of verbal productivity, semantic diversity, and morphosyntax. Consistent with previous research, the results revealed no significant interactions between group and context, indicating that the two groups of children responded to the type of elicitation context in a similar way. There was a significant group effect, however, with the typical readers showing better performance overall on measures of morphosyntax and semantic diversity. There was also a significant effect of elicitation context with both groups of children producing the longest, linguistically most dense language samples in the story retelling context. Finally, the most significant differences in group performance were observed in the story retelling condition, with the typical readers outperforming the poor readers on measures of verbal productivity, number of different words, and percent complex sentences. The results from this study confirm that oral narrative samples can distinguish between good and poor readers and that the story retelling condition may be a particularly useful context for identifying strengths and weaknesses in oral narrative performance.
Nicodemus, Brenda; Emmorey, Karen
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…
This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…
Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article, primarily, presents a critical evaluation of the development and refinement of the assessment instrument used to assess formally the spoken-language educational interpreters at ...
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…
…sound of that language. These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian language processing. People from different regions of India speak around 800 different languages.
This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just as a result of the systemic differences that exist between both language systems (Izon and English) but also as a result of the interlanguage factors such ...
Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda
… In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...
The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
This article introduces the first Spoken Language Identification system developed to distinguish among all eleven of South Africa’s official languages. The PPR-LM (Parallel Phoneme Recognition followed by Language Modeling) architecture...
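A PPR-LM system decodes each utterance into a phone string and scores that string under a phone n-gram language model per target language, choosing the best-scoring language. The sketch below shows only the scoring-and-argmax step; the phone strings and training data are invented toy material standing in for real phone-recognizer output.

```python
from collections import Counter
import math

def train_bigram_lm(phone_strings, alpha=1.0):
    """Add-alpha smoothed bigram model over phone symbols; returns a log-prob scorer."""
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for s in phone_strings:
        seq = ["<s>"] + s.split() + ["</s>"]
        vocab.update(seq)
        unigrams.update(seq[:-1])
        bigrams.update(zip(seq[:-1], seq[1:]))
    V = len(vocab)
    def logprob(s):
        seq = ["<s>"] + s.split() + ["</s>"]
        return sum(math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * V))
                   for a, b in zip(seq[:-1], seq[1:]))
    return logprob

# Toy "decoded phone strings" per language (purely illustrative).
train = {
    "english": ["dh ax k ae t", "ax b uh k"],
    "french":  ["l ax sh a", "l ax l i v r"],
}
lms = {lang: train_bigram_lm(data) for lang, data in train.items()}

def identify(phone_string):
    # PPR-LM decision rule: pick the language whose LM gives the decoded string
    # the highest log-probability.
    return max(lms, key=lambda lang: lms[lang](phone_string))

print(identify("l ax sh a"))  # scores highest under the French toy LM
```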
The assessment instrument used to assess formally the spoken-language educational interpreters at Stellenbosch University (SU) asks whether the interpreter is suited to the module and is easy to follow, and rates criteria including microphone technique, lag, completeness, language use, vocabulary, role, and personal objectives...
This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently
Alt, Mary; Gutmann, Michelle L
This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.
Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois
The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.
…the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use...
Parisse, C; Le Normand, M T
The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.
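The general mechanism behind such automatic tag disambiguation can be sketched as follows: a lexicon proposes every possible tag for each word, and tag-sequence statistics select the most probable path through the resulting lattice. The mini-lexicon and log-probabilities below are invented for illustration and are not POST's actual model.

```python
import math

# Invented mini-lexicon: each word maps to its possible part-of-speech tags.
lexicon = {
    "the": ["DET"],
    "can": ["NOUN", "VERB", "AUX"],   # ambiguous, like the majority of English forms
    "rusts": ["VERB", "NOUN"],
}

# Invented tag-bigram log-probabilities (in practice estimated from a tagged corpus).
trans = {
    ("<s>", "DET"): -0.2, ("<s>", "NOUN"): -1.5, ("<s>", "VERB"): -2.0, ("<s>", "AUX"): -2.0,
    ("DET", "NOUN"): -0.3, ("DET", "VERB"): -2.5, ("DET", "AUX"): -2.5,
    ("NOUN", "VERB"): -0.5, ("NOUN", "NOUN"): -1.5,
    ("VERB", "NOUN"): -1.0, ("VERB", "VERB"): -2.0,
    ("AUX", "VERB"): -0.4, ("AUX", "NOUN"): -2.5,
}

def disambiguate(words):
    """Exhaustive search over the tag lattice (fine for a sketch; a real
    tagger would use Viterbi). Unseen tag bigrams get a penalty score."""
    paths = {("<s>",): 0.0}
    for w in words:
        new_paths = {}
        for path, score in paths.items():
            for tag in lexicon[w]:
                s = score + trans.get((path[-1], tag), -10.0)
                key = path + (tag,)
                if s > new_paths.get(key, -math.inf):
                    new_paths[key] = s
        paths = new_paths
    best = max(paths, key=paths.get)
    return list(best[1:])

print(disambiguate(["the", "can", "rusts"]))  # -> ['DET', 'NOUN', 'VERB']
```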
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Locke, John L.
A major synthesis of the latest research on early language acquisition, this book explores what gives infants the remarkable capacity to progress from babbling to meaningful sentences, and what inclines a child to speak. The book examines the neurological, perceptual, social, and linguistic aspects of language acquisition in young children, from…
The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study that used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.
…rates when no Japanese acoustic models are constructed. An increasing amount of Japanese training data is used to train the language classifier of an English-only (E), an English-French (EF), and an English-French-Portuguese PPR system. … Because of their role as world languages that are widely spoken in Africa, our initial LID system was designed to distinguish between English, French and Portuguese. We therefore trained phone recognizers and language...
Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann
Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…
Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah
Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken En...
Remington, Robert J.
Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how a SL interface will simplify the operator's task and enhance overall system usability.
Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/ machine and human/ human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin
Moeller, Aleidine J.; Theiler, Janine
Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…
Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre
To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on the Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and the Test of Language Development-Primary, third edition, for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from the Test of Language Development and verbal intelligence (P < …) … phenylketonuria subjects.
Inegbeboh, Bridget O.
Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…
Office of English Language Acquisition, US Department of Education, 2015
The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…
Nippold, Marilyn A; Frantz-Kaspar, Megan W; Vigeland, Laura M
In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment. Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density. Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task. Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.
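The two metrics named above (mean length of communication unit and clausal density) can be computed mechanically once a transcript is coded; the sketch below uses invented C-unit counts, not the study's data:

```python
# Hypothetical coded sample: each communication unit (C-unit) is a dict with its
# word count and its number of clauses (one main clause plus any subordinates).
cunits = [
    {"words": 8, "clauses": 1},
    {"words": 14, "clauses": 2},
    {"words": 11, "clauses": 2},
    {"words": 7, "clauses": 1},
]

def mlcu(units):
    """Mean length of communication unit, in words."""
    return sum(u["words"] for u in units) / len(units)

def clausal_density(units):
    """Mean number of clauses (main + subordinate) per C-unit."""
    return sum(u["clauses"] for u in units) / len(units)
```

On this toy sample, mlcu(cunits) is 10.0 words per C-unit and clausal_density(cunits) is 1.5 clauses per C-unit; tools such as Systematic Analysis of Language Transcripts automate exactly this kind of tally.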
Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M
The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital-based pediatric dental clinic under the age of 72 months to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish, and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.
Barberà, Gemma; Zwets, Martine
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
Eisenberg, Laurie S; Fisher, Laurel M; Johnson, Karen C; Ganguly, Dianne Hammes; Grace, Thelma; Niparko, John K
We investigated associations between sentence recognition and spoken language for children with cochlear implants (CI) enrolled in the Childhood Development after Cochlear Implantation (CDaCI) study. In a prospective longitudinal study, sentence recognition percent-correct scores and language standard scores were correlated at 48-, 60-, and 72-months post-CI activation. Six tertiary CI centers in the United States. Children with CIs participating in the CDaCI study. Cochlear implantation. Sentence recognition was assessed using the Hearing In Noise Test for Children (HINT-C) in quiet and at +10, +5, and 0 dB signal-to-noise ratio (S/N). Spoken language was assessed using the Clinical Assessment of Spoken Language (CASL) core composite and the antonyms, paragraph comprehension (syntax comprehension), syntax construction (expression), and pragmatic judgment tests. Positive linear relationships were found between CASL scores and HINT-C sentence scores when the sentences were delivered in quiet and at +10 and +5 dB S/N, but not at 0 dB S/N. At 48 months post-CI, sentence scores at +10 and +5 dB S/N were most strongly associated with CASL antonyms. At 60 and 72 months, sentence recognition in noise was most strongly associated with paragraph comprehension and syntax construction. Children with CIs learn spoken language in a variety of acoustic environments. Despite the observed inconsistent performance in different listening situations and noise-challenged environments, many children with CIs are able to build lexicons and learn the rules of grammar that enable recognition of sentences.
Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…
Wang, Zhen; Zechner, Klaus; Sun, Yu
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
Thothathiri, Malathi; Snedeker, Jesse
Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven more elusive, fueling claims that comprehension is less dependent on general syntactic representations and more dependent on lexical knowledge. In three experiments we explored syntactic priming during spoken language comprehension. Participants acted out double-object (DO) or prepositional-object (PO) dative sentences while their eye movements were recorded. Prime sentences used different verbs and nouns than the target sentences. In target sentences, the onset of the direct-object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (DO e.g., Show the horse the book; PO e.g., Show the horn to the dog). We measured the difference in looks to the potential recipient and the potential theme during the ambiguous interval. In all experiments, participants who heard DO primes showed a greater preference for the recipient over the theme than those who heard PO primes, demonstrating across-verb priming during online language comprehension. These results accord with priming found in production studies, indicating a role for abstract structural information during comprehension as well as production.
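The eye-movement measure described above can be sketched as a simple difference score per prime condition; the trial proportions below are invented, not the study's data:

```python
# Sketch of the preference measure: for each trial, take the proportion of looks
# to the potential recipient minus looks to the potential theme during the
# ambiguous interval, then average within each prime condition.
def recipient_preference(trials):
    """Mean (recipient-looks - theme-looks) proportion across trials."""
    return sum(t["recipient"] - t["theme"] for t in trials) / len(trials)

do_prime_trials = [{"recipient": 0.55, "theme": 0.30}, {"recipient": 0.60, "theme": 0.25}]
po_prime_trials = [{"recipient": 0.40, "theme": 0.45}, {"recipient": 0.35, "theme": 0.50}]

do_pref = recipient_preference(do_prime_trials)  # positive: recipient advantage
po_pref = recipient_preference(po_prime_trials)  # negative: theme advantage
```

With these toy numbers, do_pref exceeds po_pref, which is the direction of the priming effect the abstract reports (DO primes shift looks toward the recipient).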
Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae
Although there have been enormous investments into English education all around the world, not many differences have been made to change the English instruction style. Considering the shortcomings for the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog system, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that help learners develop to be proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.
Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.
BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken…
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid
Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple…
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M
A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
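The regional analysis described above boils down to correlating a region's ALFF values across participants with their later learning scores; the sketch below uses invented ALFF values and accuracies, not the study's RS-fMRI data:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented per-participant values: regional ALFF in the left superior temporal
# gyrus, and later sound-to-word learning accuracy (%).
stg_alff = [0.8, 1.1, 0.9, 1.4, 1.2]
learning = [55, 70, 60, 85, 80]

r = pearson_r(stg_alff, learning)  # positive, matching the reported direction
```

A positive r here mirrors the reported direction for the left superior temporal gyrus; for DMN regions the abstract reports the opposite sign.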
Shaw, Emily P.
This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…
Huettig, Falk; Brouwer, Susanne
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
Hampton, L. H.; Kaiser, A. P.
Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…
The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two modalities of language, and with the application and correlation of the end-focus and end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processability it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position, or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.
The research and development of the Slovak spoken language dialogue system (SLDS) is described in the paper. The dialogue system is based on the DARPA Communicator architecture and was developed from July 2003 to June 2006. It consists of the Galaxy hub and telephony, automatic speech recognition, text-to-speech, backend, transport, and VoiceXML dialogue management and automatic evaluation modules. The dialogue system is demonstrated and tested via two pilot applications, "Weather Forecast" and "Public Transport Timetables". The required information is retrieved from Internet resources in multi-user mode through PSTN, ISDN, GSM and/or VoIP networks. Further development carried out since 2006 is also described in the paper.
Doshi, Finale; Roy, Nicholas
Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
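A minimal sketch of the POMDP belief update underlying such a dialogue manager follows; the two-intent observation model and its probabilities are invented for illustration, not taken from the paper's wheelchair system:

```python
# The dialogue state (the user's intent) is hidden; the agent maintains a belief
# distribution over intents and updates it from noisy speech-recognition
# observations via Bayes' rule: b'(s) ∝ P(o|s) * b(s).
INTENTS = ["go_kitchen", "go_bedroom"]

# P(observed keyword | intent): an assumed observation model.
OBS_MODEL = {
    "kitchen": {"go_kitchen": 0.8, "go_bedroom": 0.1},
    "bedroom": {"go_kitchen": 0.1, "go_bedroom": 0.8},
}

def update_belief(belief, observation):
    """Bayesian belief update over hidden intents."""
    unnorm = {s: OBS_MODEL[observation][s] * belief[s] for s in INTENTS}
    z = sum(unnorm.values())  # normalizing constant P(o)
    return {s: p / z for s, p in unnorm.items()}

belief = {"go_kitchen": 0.5, "go_bedroom": 0.5}
belief = update_belief(belief, "kitchen")  # belief shifts toward go_kitchen
```

A policy layered on top of this belief would then trade off acting (driving to the kitchen) against clarification queries when the belief is too uncertain, which is the risk-sensitive querying behaviour the abstract describes.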
The aim of this study was to describe the application of the Communicative Language Teaching (CLT) method to the learning of spoken recount. The study examined qualitative data, describing phenomena occurring in the classroom. The data were the behaviour and responses of students learning spoken recount through the CLT method. The subjects were 34 tenth-grade students of SMA Negeri 1 Kuaro. Observations and interviews were conducted to collect data on teaching spoken recount through three activities (presentation, role-play, and carrying out procedures). Among the findings was that CLT improved the students' speaking ability in recount learning. Based on the improvement charts, it is concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance improved, meaning that their spoken recount performance improved. Had the presentation been placed at the end of the activity sequence, their spoken recount performance would have been even better. The conclusion is that implementing the CLT method with its three practices contributed to improving the students' speaking ability in recount learning, and moreover led them to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses
Nicholas, Johanna G.; Geers, Ann E.
Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…
Pisoni, David B.
This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…
Carrero Pérez, Nubia Patricia
Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…
Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José
Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…
Jul 1, 2009 … correct language that has been acquired through listening. The Brewsters suggest an 'immersion experience' by living with speakers of the language. Ellis included several of their tools, such as loop tapes, as being useful in a consultation when learning a language. Others disagree with a purely…
Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna
Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.
Vaughn, Charlotte R; Bradlow, Ann R
While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Namely, we compared the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), with: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies seem to be independent of listeners' bilingual status or experience with the languages tested. Moreover, the data reveal processing interference asymmetries, suggesting a processing hierarchy for indexical, non-linguistic speech features.
Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine
Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI ), and hypothesized that the facilitation effect would vary with language abilities. In Experiment 1, 69 children with TLD (7-10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7-12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group who scored at the level of the younger children with TLD . None of the experiments showed a facilitation effect of sung over spoken stimuli. Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI , and are discussed in light of the role of durational prosodic cues in words detection.
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity
Johan Frijns; prof. Dr. Louis Peeraer; van Wieringen; Ingeborg Dhooge; Vermeulen; Jan Brokx; Tinne Boons; Wouters
Objectives: Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to…
Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
Spoken Language Identification (LID) is the process of determining and classifying the natural language of given content or a dataset. Typically, the data must be processed to extract useful features before LID can be performed. Feature extraction for LID is, according to the literature, a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features, the Gaussian Mixture Model (GMM), and, most recently, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e., optimised) to capture all of the knowledge embedded in those features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e., optimised) owing to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are reported for LID on datasets created from eight different languages, and show a clear performance advantage for ESA-ELM LID over SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared with 95.00% for SA-ELM LID.
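The basic ELM training procedure named above can be sketched in a few lines: input weights are drawn at random and only the output weights are solved for in closed form. The sketch below uses synthetic 2-D data standing in for utterance features; it is a minimal illustration of plain ELM, not the paper's ESA-ELM (which additionally optimises the selection of the random weights), and the LID front end (MFCC/SDC/i-vector extraction) is omitted.

```python
# Minimal Extreme Learning Machine (ELM) sketch on toy two-class data.
import numpy as np

rng = np.random.default_rng(42)

# Toy data: two Gaussian blobs with one-hot targets.
X = np.vstack([rng.normal(-2, 1, size=(100, 2)), rng.normal(2, 1, size=(100, 2))])
T = np.vstack([np.tile([1, 0], (100, 1)), np.tile([0, 1], (100, 1))])

n_hidden = 50
# 1) Random input weights and biases -- never trained. This random
#    selection is exactly the step SA-ELM/ESA-ELM seek to optimise.
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)

# 2) Hidden-layer activations.
H = np.tanh(X @ W + b)

# 3) Output weights in closed form via the Moore-Penrose pseudoinverse.
beta = np.linalg.pinv(H) @ T

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
acc = (pred == np.argmax(T, axis=1)).mean()
print(f"training accuracy: {acc:.2f}")
```

Because only `beta` is fitted, training reduces to one linear solve, which is what makes ELM fast to train relative to backpropagation.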
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
João Mendonça Correia
Spoken word recognition and production require fast transformations between acoustic, phonological and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., ‘paard’–‘horse’). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time window (~50–620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550–600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in…
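The across-language generalization logic described above (train a decoder on trials from one language, test it on the other) can be sketched with a simple nearest-centroid classifier on synthetic data. The feature dimensions, trial counts, and noise model below are invented for illustration and do not reproduce the study's EEG pipeline; they merely show why above-chance cross-language accuracy implies a language-invariant component in the features.

```python
# Synthetic "EEG" trials: each animal concept has a shared semantic pattern
# plus a language-specific acoustic offset, mimicking Dutch vs. English tokens.
import math
import random

random.seed(1)
N_FEAT, N_CONCEPTS, TRIALS_PER = 32, 4, 40  # hypothetical sizes

semantic = [[random.gauss(0, 1) for _ in range(N_FEAT)] for _ in range(N_CONCEPTS)]
acoustic = {lang: [random.gauss(0, 1) for _ in range(N_FEAT)]
            for lang in ("dutch", "english")}

def make_trials(lang):
    X, y = [], []
    for c in range(N_CONCEPTS):
        for _ in range(TRIALS_PER):
            X.append([semantic[c][i] + 0.5 * acoustic[lang][i]
                      + 0.8 * random.gauss(0, 1) for i in range(N_FEAT)])
            y.append(c)
    return X, y

def centroids(X, y):
    # Per-class mean of the training trials.
    cents = {}
    for c in set(y):
        rows = [v for v, lab in zip(X, y) if lab == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def classify(cents, x):
    # Nearest centroid by squared Euclidean distance.
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

# Train on "Dutch" trials, test on "English" trials.
X_nl, y_nl = make_trials("dutch")
X_en, y_en = make_trials("english")
cents = centroids(X_nl, y_nl)
acc = sum(classify(cents, x) == y for x, y in zip(X_en, y_en)) / len(y_en)
print(f"across-language accuracy: {acc:.2f} (chance = {1 / N_CONCEPTS:.2f})")
```

Accuracy well above 1/4 here is driven entirely by the shared `semantic` patterns, since the acoustic offset differs between training and test languages.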
Wilang, Jeffrey Dawala; Sinwongsuwat, Kemtong
This year is designated as Thailand's "English Speaking Year" with the aim of improving the communicative competence of Thais for the upcoming integration of the Association of Southeast Asian Nations (ASEAN) in 2015. The consistent low-level proficiency of the Thais in the English language has led to numerous curriculum revisions and…
le Fevre Jakobsen, Bjarne
with well-edited material, in 1965, to an anchor who hands over to journalists in live feeds from all over the world via satellite, Skype, or mobile telephone, in 2011. The narrative rhythm is faster and sometimes more spontaneous. In this article we will discuss aspects of the use of language and the tempo...
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, with fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language.
Maldonado Torres, Sonia Enid
The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…
Paladino, Jonathan D; Crooke, Philip S; Brackney, Christopher R; Kaynar, A Murat; Hotchkiss, John R
Medical care commonly involves the apprehension of complex patterns of patient derangements to which the practitioner responds with patterns of interventions, as opposed to single therapeutic maneuvers. This complexity renders the objective assessment of practice patterns using conventional statistical approaches difficult. Combinatorial approaches drawn from symbolic dynamics are used to encode the observed patterns of patient derangement and associated practitioner response patterns as sequences of symbols. Concatenating each patient derangement symbol with the contemporaneous practitioner response symbol creates "words" encoding the simultaneous patient derangement and provider response patterns and yields an observed vocabulary with quantifiable statistical characteristics. A fundamental observation in many natural languages is the existence of a power law relationship between the rank order of word usage and the absolute frequency with which particular words are uttered. We show that population level patterns of patient derangement: practitioner intervention word usage in two entirely unrelated domains of medical care display power law relationships similar to those of natural languages, and that-in one of these domains-power law behavior at the population level reflects power law behavior at the level of individual practitioners. Our results suggest that patterns of medical care can be approached using quantitative linguistic techniques, a finding that has implications for the assessment of expertise, machine learning identification of optimal practices, and construction of bedside decision support tools.
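The rank-frequency analysis described above can be sketched as follows. Derangement and response symbols are concatenated into "words", word frequencies are counted, and the slope of log(frequency) against log(rank) is estimated; an approximately straight line with negative slope is the power-law signature observed in natural languages. The symbol alphabets, sampling weights, and stream length are hypothetical stand-ins for real clinical observations.

```python
# Toy rank-frequency (Zipf-style) check on synthetic derangement:response "words".
import math
import random
from collections import Counter

random.seed(0)
derangements = "ABCDE"   # hypothetical patient-state symbols
responses = "xyz"        # hypothetical practitioner-response symbols

# Skewed sampling so that some states recur far more often than others.
weights = [1 / (i + 1) for i in range(len(derangements))]
words = [random.choices(derangements, weights)[0] + random.choice(responses)
         for _ in range(5000)]

counts = sorted(Counter(words).values(), reverse=True)

# Least-squares slope of log(frequency) vs. log(rank).
xs = [math.log(r + 1) for r in range(len(counts))]
ys = [math.log(c) for c in counts]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"{len(counts)} distinct words, log-log slope = {slope:.2f}")
```

On real data one would also test goodness of fit (e.g., against a lognormal alternative) before claiming power-law behavior; this sketch only shows the encoding and the slope estimate.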
Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B
Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures.
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Language technologies, in particular machine translation applications, have the potential to help break down linguistic and cultural barriers, presenting an important contribution to the globalization and internationalization of the Portuguese language, by allowing content to be shared 'from' and 'to' this language. This article aims to present the research work developed at the Laboratory of Spoken Language Systems of INESC-ID in the field of machine translation, namely automated speech translation, the translation of microblogs, and the creation of a hybrid machine translation system. We will focus on the creation of the hybrid system, which aims at combining linguistic knowledge, in particular semantico-syntactic knowledge, with statistical knowledge, to increase the level of translation quality.
Development of Automatic Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...
Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J
This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.
Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R
Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.
Hirschmüller, Sarah; Egloff, Boris
How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
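The dictionary-based word counting at the heart of the computerized quantitative text analysis described above can be sketched as follows: score a statement by the share of words matching positive and negative emotion lexicons. The mini-lexicons and example sentence below are invented for illustration and are not the validated dictionaries used in the study.

```python
# Minimal dictionary-based emotion-word proportion scorer.
import re

# Hypothetical mini-lexicons (real analyses use large validated dictionaries).
POSITIVE = {"love", "peace", "thank", "hope", "joy", "grateful"}
NEGATIVE = {"fear", "pain", "hate", "sorry", "grief", "angry"}

def emotion_proportions(text):
    # Tokenize to lowercase word forms, then count lexicon hits.
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)

statement = "I love you all and I hope you find peace thank you I am grateful"
p, n = emotion_proportions(statement)
print(f"positive: {p:.2%}, negative: {n:.2%}")
```

Comparing such proportions against base rates in reference corpora is what allows the claim that a set of statements is unusually emotionally positive.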
Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G
This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
Brennan-Jones, Christopher G; White, Jo; Rush, Robert W; Law, James
Congenital or early-acquired hearing impairment poses a major barrier to the development of spoken language and communication. Early detection and effective (re)habilitative interventions are essential for parents and families who wish their children to achieve age-appropriate spoken language. Auditory-verbal therapy (AVT) is a (re)habilitative approach aimed at children with hearing impairments. AVT comprises intensive early intervention therapy sessions with a focus on audition, technological management and involvement of the child's caregivers in therapy sessions; it is typically the only therapy approach used to specifically promote avoidance or exclusion of non-auditory facial communication. The primary goal of AVT is to achieve age-appropriate spoken language and for this to be used as the primary or sole method of communication. AVT programmes are expanding throughout the world; however, little evidence can be found on the effectiveness of the intervention. To assess the effectiveness of auditory-verbal therapy (AVT) in developing receptive and expressive spoken language in children who are hearing impaired. CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL, speechBITE and eight other databases were searched in March 2013. We also searched two trials registers and three theses repositories, checked reference lists and contacted study authors to identify additional studies. The review considered prospective randomised controlled trials (RCTs) and quasi-randomised studies of children (birth to 18 years) with a significant (≥ 40 dBHL) permanent (congenital or early-acquired) hearing impairment, undergoing a programme of auditory-verbal therapy, administered by a certified auditory-verbal therapist for a period of at least six months. Comparison groups considered for inclusion were waiting list and treatment as usual controls. Two review authors independently assessed titles and abstracts identified from the searches and obtained full-text versions of all potentially
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214
Feghali, Maksoud N.
This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…
Gautreau, Aurore; Hoen, Michel; Meunier, Fanny
This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB signal-to-noise ratio) and 2 lexical decision tasks (at -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained spectro-temporal information similar to babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results for the two unknown languages: Italian and French hindered French target word identification to a similar extent, whereas Irish led to significantly better performance on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it revealed a linguistic effect only for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel
This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…
Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie
Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…
Peters, Sara A; Boiteau, Timothy W; Almor, Amit
The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis.
Rubin, H; Kantor, M; Macnab, J
Experiments examined grammatical judgement and error-identification deficits in relation to expressive language skills and to morphemic errors in writing. Language-disabled subjects did not differ from language-matched controls on judgement, revision, or error identification. Age-matched controls represented more morphemes in elicited writing than either of the other groups, which were equivalent. However, in spontaneous writing, language-disabled subjects made more frequent morphemic errors than age-matched controls, but language-matched subjects did not differ from either group. Proficiency relative to academic experience and oral language status is discussed, along with remedial implications.
Choroomi, S; Curotta, J
To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.
Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie
We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared this activation to that obtained when hearing speech in their mother tongue. A subset of left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because the areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language. Copyright © 2010 Elsevier Inc. All rights reserved.
Werfel, Krystal L
The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated a positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.
Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.
Kasyidi, Fatan; Puji Lestari, Dessi
One of the important aspects of human-to-human communication is understanding the emotion of each party. Interaction between humans and computers continues to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition in spoken Indonesian to identify four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotion speech corpus from an Indonesian television talk show, in which the situations are as close as possible to natural ones. After constructing the emotion speech corpus, we extracted the acoustic/prosodic and lexical features to train the emotion model. We employed several machine learning algorithms, such as Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. Experimental results on the test data show that the best model achieves an F-measure of 0.447 using only the acoustic/prosodic features and of 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes with an SVM using an RBF kernel.
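The pipeline this abstract describes (features extracted from speech, then an SVM with an RBF kernel scored by macro F-measure) can be illustrated in a few lines. This is a sketch on synthetic data: the feature values, class separation, and sample sizes are invented stand-ins, not the authors' Indonesian corpus.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
LABELS = ["happy", "sad", "angry", "contentment"]

# Stand-in acoustic/prosodic features (e.g., mean pitch, energy, speech rate);
# each emotion class gets a shifted mean so the toy problem is learnable.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(50, 3)) for i in range(len(LABELS))])
y = np.repeat(np.arange(len(LABELS)), 50)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# RBF-kernel SVM, as in the paper's best-performing model.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
macro_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
```

On well-separated synthetic classes the macro F-measure is far higher than the 0.447 reported for real speech, which underlines how much harder natural emotional speech is than a toy problem.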
Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors bear on listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, across four carefully defined speaker groups: (1) MS with cognitive deficits (MSCI); (2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS); (3) MS without dysarthria or cognitive deficits (MS); and (4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech rate and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners
Petkov, Christopher I; Jarvis, Erich D
Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.
Leni Amalia Suek
The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The dominance of a majority language over a community language shapes migrant students' attitudes toward their native languages. When they perceive their native languages as unimportant, they reduce the frequency of using those languages, even in the home domain. Solutions to the problem of maintaining community languages should therefore address language use and attitudes in the two domains where they chiefly develop: school and family. Hence, the valorization of community languages should be promoted not only in the family but also in the school domain. Programs such as community language schools and community language programs can give migrant students opportunities to practice and use their native languages. Since educational resources such as class sessions, teachers, and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of native languages.
Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda
Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; aged 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
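The key analysis here, a within-language association that survives controlling for other variables, is a partial correlation. A minimal sketch of that computation follows; the generative model, variable names, and effect size are assumptions for illustration, not the study's data.

```python
import numpy as np

def partial_corr(x, y, covars):
    """Correlation of x and y after regressing both on the covariates."""
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residualize x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residualize y
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
n = 26  # sample size matching the study
speed_es = rng.normal(size=n)                          # Spanish processing efficiency
vocab_en = rng.normal(size=n)                          # English vocabulary (covariate)
vocab_es = 0.7 * speed_es + 0.3 * rng.normal(size=n)   # simulated within-language link
covars = np.column_stack([vocab_en, rng.normal(size=n)])  # other-language measures
r = partial_corr(speed_es, vocab_es, covars)           # within-language partial r
```

Residualizing both variables on the covariates and then correlating the residuals is equivalent to the partial correlation a regression model would report for the focal predictor.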
Adank, P.M.; Noordzij, M.L.; Hagoort, P.
A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation, speaker and accent, during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a
The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat," "hut," "heart" and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…
Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J
In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.
McDuffie, Andrea; Banasik, Amy; Bullard, Lauren; Nelson, Sarah; Feigles, Robyn Tempero; Hagerman, Randi; Abbeduto, Leonard
A small randomized group design (N = 20) was used to examine a parent-implemented intervention designed to improve the spoken language skills of school-aged and adolescent boys with FXS, the leading cause of inherited intellectual disability. The intervention was implemented by speech-language pathologists who used distance video-teleconferencing to deliver the intervention. The intervention taught mothers to use a set of language facilitation strategies while interacting with their children in the context of shared story-telling. Treatment group mothers significantly improved their use of the targeted intervention strategies. Children in the treatment group increased the duration of engagement in the shared story-telling activity as well as use of utterances that maintained the topic of the story. Children also showed increases in lexical diversity, but not in grammatical complexity.
KRISHNAMURTHI, M.G.; MCCORMACK, WILLIAM
The twenty graded units in this text constitute an introduction to both informal and formal spoken Kannada. The first two units present the Kannada material in phonetic transcription only, with Kannada script gradually introduced from Unit III on. A typical lesson-unit includes (1) a dialog in phonetic transcription and English translation, (2)…
This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…
Blumenfeld, Henrike K.; Marian, Viorica
Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842
Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony
Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; a 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.
Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene
Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
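The across-language generalization test described here, training a classifier on response patterns to nouns in one language and testing it on the equivalent nouns in the other, can be sketched on simulated data. All specifics below (voxel count, noise levels, logistic regression in place of the study's multivariate classifiers) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_words, n_voxels, n_trials = 4, 20, 30
# A shared, language-independent "semantic" pattern per word.
prototypes = rng.normal(size=(n_words, n_voxels))

def simulate(lang_shift):
    """Response patterns: shared prototype + language-specific shift + noise."""
    X = np.vstack([
        prototypes[w] + lang_shift + rng.normal(scale=0.8, size=(n_trials, n_voxels))
        for w in range(n_words)
    ])
    y = np.repeat(np.arange(n_words), n_trials)
    return X, y

X_en, y_en = simulate(rng.normal(scale=0.2, size=n_voxels))  # "English" nouns
X_nl, y_nl = simulate(rng.normal(scale=0.2, size=n_voxels))  # "Dutch" equivalents

clf = LogisticRegression(max_iter=1000).fit(X_en, y_en)
cross_acc = clf.score(X_nl, y_nl)  # across-language generalization accuracy
chance = 1.0 / n_words
```

If the simulated prototypes were not shared across languages, cross_acc would fall to chance; above-chance generalization is what licenses the inference to language-independent semantic representations.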
In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices, comments on a character and the character's actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides
Geytenbeek, J.J.M.; Vermeulen, R.J.; Becher, J.G.; Oostrom, K.J.
Aim: To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Method: Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic
De Angelis, Gessica
The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…
Li, Xiao-qing; Ren, Gui-qin
An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…
Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did not significantly alter the word-elicited MMN in amplitude or scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
Schaefer, Blanca; Stackhouse, Joy; Wells, Bill
There is strong empirical evidence that English-speaking children with spoken language difficulties (SLD) often have phonological awareness (PA) deficits. The aim of this study was to explore longitudinally if this is also true of pre-school children speaking German, a language that makes extensive use of derivational morphemes which may impact on the acquisition of different PA levels. Thirty 4-year-old children with SLD were assessed on 11 PA subtests at three points over a 12-month period and compared with 97 four-year-old typically developing (TD) children. The TD-group had a mean percentage correct of over 50% for the majority of tasks (including phoneme tasks) and their PA skills developed significantly over time. In contrast, the SLD-group improved their PA performance over time on syllable and rhyme, but not on phoneme level tasks. Group comparisons revealed that children with SLD had weaker PA skills, particularly on phoneme level tasks. The study contributes a longitudinal perspective on PA development before school entry. In line with their English-speaking peers, German-speaking children with SLD showed poorer PA skills than TD peers, indicating that the relationship between SLD and PA is similar across these two related but different languages.
Jansen, Stefanie; Wesselmeier, Hendrik; de Ruiter, Jan P; Mueller, Horst M
Even though research on turn-taking in spoken dialogues is now abundant, a typical EEG signature associated with the anticipation of turn-ends has not yet been identified. The purpose of this study was to examine whether readiness potentials (RP) can be used to study the anticipation of turn-ends, using them in a motoric finger-movement and an articulatory-movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes by the simultaneous registration of EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For our behavioural measures, we used both a button press and a brief verbal response ("yes"). In the experiment, 30 subjects were asked to listen to auditorily presented utterances and press a button or utter a brief verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. The results showed that the RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. Until now, turn-end anticipation has usually been studied using behavioural methods, for instance by measuring anticipation timing accuracy, a measurement that reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. The similar time course of the recorded RP signals for both verbal and button-press responses provides evidence for the validity of using RPs as an online marker for response preparation in turn-taking and spoken dialogue research.
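The idea of recovering a preconscious onset from an averaged slow potential can be sketched as a simple backward search: starting from the response, find the last moment at which the smoothed signal was still within the baseline range. The following is a hypothetical illustration on synthetic data; the ramp timing, threshold, and smoothing window are assumptions for illustration, not the authors' analysis parameters.

```python
import random

random.seed(1)

# Toy averaged EEG epoch: 2001 samples at 1 kHz, from -2000 ms to 0 ms
# (the behavioural response). A slow negative ramp (the RP) is simulated
# to start 1200 ms before the response; units are arbitrary (microvolt-like).
t = list(range(-2000, 1))
signal = [(0.0 if ms < -1200 else (ms + 1200) * -0.005) + random.gauss(0, 0.2)
          for ms in t]

# Baseline statistics from a quiet pre-movement window (-2000 to -1700 ms).
baseline = signal[:300]
mean_b = sum(baseline) / len(baseline)
sd_b = (sum((x - mean_b) ** 2 for x in baseline) / len(baseline)) ** 0.5
threshold = mean_b - 2 * sd_b

# Trailing 50 ms moving average to suppress sample-to-sample noise.
window = 50
smooth = [sum(signal[max(0, i - window + 1):i + 1])
          / len(signal[max(0, i - window + 1):i + 1])
          for i in range(len(signal))]

# Onset estimate: last sample still at baseline before the sustained drift.
last_above = max(i for i, v in enumerate(smooth) if v >= threshold)
onset_ms = t[last_above + 1]
print(f"estimated RP onset: {onset_ms} ms relative to the response")
```

Because of the smoothing lag and the 2-SD criterion, the detected onset sits somewhat after the true start of the simulated ramp; real RP-onset estimation likewise trades sensitivity against false alarms in exactly this way.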
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from federal...
Colin, C; Zuinen, T; Bayard, C; Leybaert, J
Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400.
Coplan, Robert J.; Weeks, Murray
The goal of this study was to examine the moderating role of pragmatic language in the relations between shyness and indices of socio-emotional adjustment in an unselected sample of early elementary school children. In particular, we sought to explore whether pragmatic language played a protective role for shy children. Participants were n = 167…
Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán
Purpose: Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…
Methods. Qualitative individual interviews were conducted with seven doctors who had successfully learned the language of their patients, to determine their experiences and how they had succeeded. Results. All seven doctors used a combination of methods to learn the language. Listening was found to be very important, ...
This paper describes a study comparing chatroom and face-to-face oral interaction for the purposes of language learning in a tertiary classroom in the United Arab Emirates. It uses transcripts analysed for Language Related Episodes, collaborative dialogues thought to be externally observable examples of noticing in action. The analysis is…
Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays the role of a…
Liebenthal, Einat; Silbersweig, David A; Stern, Emily
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala – a subcortical center for emotion perception – are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody which evolves on longer time scales and is conveyed by fine-grained spectral cues appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.
Harris, David; Bennet, Lisa; Bant, Sharyn
Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare the language abilities of children with unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes was examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of
Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R
This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal years...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits from federal fiscal year...
Sindorela Doli Kryeziu; Gentiana Muhaxhiri
In this paper we have tried to clarify the problems faced by speakers of the Gheg dialect in Gjakova, who have presented more or less difficulty in acquiring the standard. The standard language is part of the people's language, but raised to a norm according to scientific criteria. From this observation it becomes clearly understandable that the standard variety and the dialectal variant are inseparable and, as such, they represent a macro-linguistic unity. As part of this macro linguistic u...
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
To investigate the impact of a spoken language intervention curriculum aiming to improve the language environments caregivers of low socioeconomic status (SES) provide for their D/HH children with CI & HA to support children's spoken language development. Quasi-experimental. Tertiary. Thirty-two caregiver-child dyads of low-SES (as defined by caregiver education ≤ MA/MS and the income proxies = Medicaid or WIC/LINK) and children aged curriculum designed to improve D/HH children's early language environments. Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count (AWC), Conversational Turn Count (CTC)). Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth
Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to
Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.
Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…
Koyalan, Aylin; Mumford, Simon
The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…
Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa
Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…
Toledo, Paloma; Eosakul, Stanley T; Grobman, William A; Feinglass, Joe; Hasnain-Wynia, Romana
Hispanic women are less likely than non-Hispanic Caucasian women to use neuraxial labor analgesia. It is unknown whether there is a disparity in anticipated or actual use of neuraxial labor analgesia among Hispanic women based on primary language (English versus Spanish). In this 3-year retrospective, single-institution, cross-sectional study, we extracted electronic medical record data on Hispanic nulliparous women with vaginal deliveries who were insured by Medicaid. On admission, patients self-identified their primary language and anticipated analgesic use for labor. Extracted data included age, marital status, labor type, delivery provider (obstetrician or midwife), and anticipated and actual analgesic use. Household income was estimated from census data geocoded by zip code. Multivariable logistic regression models were estimated for anticipated and actual neuraxial analgesia use. Among the 932 Hispanic women, 182 self-identified as primary Spanish speakers. Spanish-speaking Hispanic women were less likely to anticipate and use neuraxial anesthesia than English-speaking women. After controlling for confounders, there was an association between primary language and anticipated neuraxial analgesia use (adjusted relative risk, Spanish- versus English-speaking women: 0.70; 97.5% confidence interval, 0.53-0.92). Similarly, there was an association between language and neuraxial analgesia use (adjusted relative risk, Spanish- versus English-speaking women: 0.88; 97.5% confidence interval, 0.78-0.99). The use of a midwife compared with an obstetrician also decreased the likelihood of both anticipating and using neuraxial analgesia. A language-based disparity was found in neuraxial labor analgesia use. It is possible that there are communication barriers in knowledge or understanding of analgesic options. Further research is necessary to determine the cause of this association.
Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how the type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative (“There are the lions.”) and interrogative (“Where are the lions?”) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.
The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...
Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.
Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung
This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process.
Schreibman, Laura; Stahmer, Aubyn C
Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.
Kobayashi, Yuichiro; Abe, Mariko
The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
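A minimal sketch of the kind of automated scoring described here might extract objectively measurable lexical features from a transcript and map them to a discrete proficiency band. The features and thresholds below are invented for illustration and stand in for a trained classifier; they are not the measures or levels used in the study.

```python
# Hypothetical feature-based scorer: map a learner transcript to one of a
# small number of oral proficiency bands using simple lexical measures
# (token count, type-token ratio). All thresholds are illustrative only.

def lexical_features(transcript: str) -> dict:
    words = transcript.lower().split()
    types = set(words)
    return {
        "tokens": len(words),
        "ttr": len(types) / len(words) if words else 0.0,   # lexical diversity
        "mean_word_len": sum(map(len, words)) / len(words) if words else 0.0,
    }

def score(transcript: str) -> str:
    f = lexical_features(transcript)
    # Toy decision rules standing in for a classifier trained on rated data.
    if f["tokens"] > 40 and f["ttr"] > 0.6:
        return "advanced"
    if f["tokens"] > 15:
        return "intermediate"
    return "beginner"

print(score("I like dog . I like cat ."))  # beginner
```

A real system would learn the mapping from many such features (lexical frequency profiles, fluency measures, and so on) to human proficiency ratings rather than hard-coding thresholds.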
ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn
This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML templates to modify the information state and to select behaviours to perform.
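A minimal Python analogue of an Information State Update rule engine might look as follows. This is only an illustrative sketch of the ISU idea (precondition, state update, behaviour selection); it does not reproduce Flipper's actual XML template syntax or API.

```python
# Illustrative Information State Update (ISU) engine: each rule carries a
# precondition on the shared information state, an effect that updates the
# state, and a behaviour for the agent to perform. Names are invented.

information_state = {"user_said": "hello", "greeted": False}

rules = [
    {
        "name": "greet_back",
        "precondition": lambda s: s["user_said"] == "hello" and not s["greeted"],
        "effects": lambda s: s.update(greeted=True),
        "behaviour": "say: Hello there!",
    },
]

def run_rules(state):
    """Fire every rule whose precondition holds; collect selected behaviours."""
    behaviours = []
    for rule in rules:
        if rule["precondition"](state):
            rule["effects"](state)
            behaviours.append(rule["behaviour"])
    return behaviours

print(run_rules(information_state))  # ['say: Hello there!']
print(information_state["greeted"])  # True
```

In Flipper itself the rules are written as XML templates rather than Python dictionaries, but the control flow (match preconditions, update the state, emit behaviours) follows this pattern.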
Moran, Catherine; Kirk, Cecilia; Powell, Emma
Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…
Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first (L1) and second language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dBA the shorter reverberation time improved recall, but at +3 dBA it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
Brodie, Kara; Abel, Gary; Burt, Jenni
To investigate if language spoken at home mediates the relationship between ethnicity and doctor-patient communication for South Asian and White British patients. We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner-patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. There was strong evidence of an association between doctor-patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0-100) than White British patients (95% CI -4.9 to -1.1, p=0.002). This difference reduced to 1.4 points (95% CI -3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI -6.4 to -0.2). South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language.
Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert
Background: Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and…
Sedgwick, Carole; Garner, Mark
Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical
Tobey, Emily A.; Thal, Donna; Niparko, John K.; Eisenberg, Laurie S.; Quittner, Alexandra L.; Wang, Nae-Yuh
Objective: This study examined specific spoken language abilities of 160 children with severe-to-profound sensorineural hearing loss followed prospectively 4, 5, or 6 years after cochlear implantation. Study sample: Ninety-eight children received implants before 2.5 years, and 62 children received implants between 2.5 and 5 years of age. Design: Language was assessed using four subtests of the Comprehensive Assessment of Spoken Language (CASL). Standard scores were evaluated by contrasting age of implantation and follow-up test time. Results: Children implanted under 2.5 years of age achieved higher standard scores than children with older ages of implantation for expressive vocabulary, expressive syntax, and pragmatic judgments. However, in both groups, some children performed more than two standard deviations below the standardization group mean, while some scored at or well above the mean. Conclusions: Younger ages of implantation are associated with higher levels of performance, while later ages of implantation are associated with higher probabilities of continued language delays, particularly within subdomains of grammar and pragmatics. Longitudinal data from this cohort study demonstrate that after 6 years of implant experience, there is large variability in language outcomes associated with modifiers of rates of language learning that differ as children with implants age. PMID:23448124
This article presents a feature analysis of four expository essays (Texts A, B, C and D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written than the other two (Texts A and B), which are considered more spoken in their language use. The language features are…
Allen, Mark D; Owens, Tyler E
Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic information. In his critique, Kemmerer argues that because Allen depended entirely on preposition-based verb subcategory violations to test WBN's knowledge of correct argument structure, his results, at best, address a "strawman" theory. This argument rests on the assumption that preposition subcategory options are superficial syntactic phenomena which are not represented by argument structure proper. We demonstrate that preposition subcategory is in fact treated as semantically determined argument structure in the theories that Allen evaluated, and thus far from irrelevant. In further discussion of grammatically relevant versus irrelevant semantic features, Kemmerer offers a review of his own studies. However, due to an important design shortcoming in these experiments, we remain unconvinced. Reemphasizing the fact that Allen (2005) never claimed to rule out all semantic contributions to syntax, we propose an improvement in Kemmerer's approach that might provide more satisfactory evidence on the distinction between the kinds of relevant versus irrelevant features his studies have addressed.
Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hoshino, Takahiro; Hagiwara, Hiroko
Children's foreign-language (FL) learning is a matter of much social as well as scientific debate. Previous behavioral research indicates that starting language learning late in life can lead to problems in phonological processing. Inadequate phonological capacity may impede lexical learning and semantic processing (phonological bottleneck hypothesis). Using both behavioral and neuroimaging data, here we examine the effects of age of first exposure (AOFE) and total hours of exposure (HOE) to English, on 350 Japanese primary-school children's semantic processing of spoken English. Children's English proficiency scores and N400 event-related brain potentials (ERPs) were analyzed in multiple regression analyses. The results showed (1) that later, rather than earlier, AOFE led to higher English proficiency and larger N400 amplitudes, when HOE was controlled for; and (2) that longer HOE led to higher English proficiency and larger N400 amplitudes, whether AOFE was controlled for or not. These data highlight the important role of amount of exposure in FL learning, and cast doubt on the view that starting FL learning earlier always produces better results. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Reviews what is known about Esperanto as a home language and first language. Cases of Esperanto-speaking families have been recorded since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...
This paper examines the impact of digital media on the relationship between writing, performance, and textuality from the perspective of literate verbal artists in Mali. It considers why some highly educated verbal artists in urban Africa self-identify as writers despite the oralizing properties of new media, and despite the fact that their own works circulate entirely through performance. The motivating factors are identified as a desire to present themselves as composers rather than as performers of texts, and to differentiate their work from that of minimally educated performers of texts associated with traditional orality.
Mann, Collette; Canny, Ben; Lindley, Jennifer; Rajan, Ramesh
Generally, in most countries around the world, local medical students outperform, in an academic sense, international students. In an endeavour to understand whether this effect is caused by language proficiency skills, we investigated academic differences between local and international MBBS students categorised by native language families. Data were available and obtained for medical students in their first and second years of study in 2002, 2003, 2005 and 2006. Information on social demographics, personal history and language(s) spoken at home was collected, as well as academic assessment results for each student. Statistical analysis was carried out with a dataset pertaining to a total of 872 students. Local students performed significantly better than international students in the first and second years. There was a significant interaction between language family and origin in the first year. Among international students only, there was a significant main effect for language in the second year, with students from Sino-Tibetan language family backgrounds obtaining higher mean scores than students from English or Indo-European language family backgrounds. Our results confirmed that, overall, local students perform better academically than international students. However, given that language family differences exist, this may reflect acculturation rather than simply English language skills.
Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun
The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.
Houston, K. Todd
Since 1946, Utah State University (USU) has offered specialized coursework in audiology and speech-language pathology, awarding the first graduate degrees in 1948. In 1965, the teacher training program in deaf education was launched. Over the years, the Department of Communicative Disorders and Deaf Education (COMD-DE) has developed a rich history…
Chen, Pei-Hua; Liu, Ting-Wei
Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…
Nelson, Sarah; McDuffie, Andrea; Banasik, Amy; Tempero Feigles, Robyn; Thurman, Angela John; Abbeduto, Leonard
This study examined the impact of a distance-delivered parent-implemented narrative language intervention on the use of inferential language during shared storytelling by school-aged boys with fragile X syndrome (FXS), an inherited neurodevelopmental disorder. Nineteen school-aged boys with FXS and their biological mothers participated. Dyads were randomly assigned to an intervention or a treatment-as-usual comparison group. Transcripts from all pre- and post-intervention sessions were coded for child use of prompted and spontaneous inferential language coded into various categories. Children in the intervention group used more utterances that contained inferential language than the comparison group at post-intervention. Furthermore, children in the intervention group used more prompted inferential language than the comparison group at post-intervention, but there were no differences between the groups in their spontaneous use of inferential language. Additionally, children in the intervention group demonstrated increases from pre- to post-intervention in their use of most categories of inferential language. This study provides initial support for the utility of a parent-implemented language intervention for increasing the use of inferential language by school-aged boys with FXS, but also suggests the need for additional treatment to encourage spontaneous use. Copyright © 2018 Elsevier Inc. All rights reserved.
Weisberg, Jill; McCullough, Stephen; Emmorey, Karen
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161
Šimáčková, Š.; Podlipský, V.J.; Chládková, K.
As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,
Popova, V.N.; Treur, J.
A specification language for performance indicators and their relations and requirements is presented and illustrated for a case study in logistics. The language can be used in different forms, varying from informal, semiformal, graphical to formal. A software environment has been developed that
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
Weber, Ann M; Fernald, Lia C H; Galasso, Emanuela; Ratsifandrihamanana, Lisy
Language tests developed and validated in one country may lose their desired properties when translated for use in another, possibly resulting in misleading estimates of ability. Using Item Response Theory (IRT) methodology, we assess the performance of a test of receptive vocabulary, the U.S.-validated Peabody Picture Vocabulary Test-Third Edition (PPVT-III), when translated, adapted, and administered to children 3 to 10 years of age in Madagascar (N = 1372), in the local language (Malagasy). Though Malagasy is considered a single language, there are numerous dialects spoken in Madagascar. Our findings were that test scores were positively correlated with age and indicators of socio-economic status. However, over half (57/96) of items evidenced unexpected response variation and/or bias by local dialect spoken. We also encountered measurement error and reduced differentiation among person abilities when we used the publishers' recommended stopping rules, largely because we lost the original item ordering by difficulty when we translated test items into Malagasy. Our results suggest that bias and testing inefficiency introduced from the translation of the PPVT can be significantly reduced with the use of methods based on IRT at both the pre-testing and analysis stages. We explore and discuss implications for cross-cultural comparisons of internationally recognized tests, such as the PPVT.
LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YASAYAN TÜRKLERİN KONUSMA DİLLERİNİN YAZI DİLİNE DÖNÜSTÜRÜLME SÜRECİ VE ÖTEKİLESTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI
Süleyman Kaan YALÇIN (M.A.H.
Language is realized in two ways: spoken language and written language. Every language can have the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, in either a natural or an artificial way, to be deemed a written language (standard language). Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some differences in itself; however, the policy of converting the spoken language of each Turkish clan into its own written language, a policy pursued by Russia in a planned way, turned Turkish, which came to the 20th century as a few written languages, into 20 different written languages. The implementation of discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the forcible imposition of a Cyrillic alphabet full of different and unnecessary signs on each Turkish clan, and the othering activities of the Soviet boarding schools that were opened had considerable effects on this process. This study aims at explaining that the conversion of the spoken languages of Turkish societies in Russia into their written languages did not result from a natural process; the historical development of the Turkish language, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and how Russia subjected the language concept, which is the memory of a nation, to an artificial process.
Oviatt, S; Bernard, J; Levow, G A
Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.
Elosua Oliden, Paula; Mujika Lizaso, Josu
When different languages co-exist in one area, or when one person speaks more than one language, the impact of language on psychological and educational assessment processes can be considerable. The aim of this work was to study the impact of testing language in a community with two official languages: Spanish and Basque. By taking the PISA 2009 Reading Comprehension Test as a basis for analysis, four linguistic groups were defined according to the language spoken at home and the test language. Psychometric equivalence between test forms and differences in results among the four language groups were analyzed. The comparison of competence means took into account the effects of the index of socioeconomic and cultural status (ISEC) and gender. One reading unit with differential item functioning was detected. The reading competence means were considerably higher in the monolingual Spanish-Spanish group. No differences were found between the language groups based on family language when the test was conducted in Basque. The study illustrates the importance of taking into account psychometric, linguistic and sociolinguistic factors in linguistically diverse assessment contexts.
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
Brown, C.M.; Berkum, J.J.A. van; Hagoort, P.
A study is presented on the effects of discourse-semantic and lexical-syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior
Kees de Bot
Human behavior is not constant over the hours of the day, and there are considerable individual differences. Some people rise early, go to bed early, and have their peak performance early in the day ("larks"), while others tend to go to bed late, get up late, and have their best performance later in the day ("owls"). In this contribution we report on three projects on the role of chronotype (CT) in language processing and learning. The first study (de Bot, 2013) reports on the impact of CT on language learning aptitude and word learning. The second project was reported in Fang (2015) and looks at CT and executive functions, in particular inhibition as measured by variants of the Stroop test. The third project aimed at assessing lexical access in L1 and L2 at preferred and non-preferred times of the day. The data suggest that there are effects of CT on language learning and processing. There is a small effect of CT on language aptitude and a stronger effect of CT on lexical access in the first and second language. The lack of significance for other tasks is mainly caused by the large interindividual and intraindividual variation.
Smith, Ann Marie
This case study explores seventh grade students' experiences with writing and performing poetry. Teacher and student interviews along with class observations provide insight into how the teacher and students viewed spoken word poetry and identity. The researcher recommends practices for the teaching of critical literacy using spoken word and…
Task-based language teaching (TBLT) is an important second language teaching method. Planning is one of the significant factors in the studies of TBLT. This paper will mainly discuss the influence of planning on students' language performance in TBLT.
Qu, Qingqing; Damian, Markus F
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
Lewina O Lee
Background: Good pulmonary function (PF) is associated with preservation of cognitive performance, primarily of executive functions, in aging (Albert et al., 1995; Chyou et al., 1996; Emery, Finkel, & Pedersen, 2012; Yohannes & Gindo, 2013). The contribution of PF to older adults' language abilities, however, has never been explored, to our knowledge. We addressed this gap by examining the effects of PF on older adults' language functions, as measured by naming and sentence processing accuracy. We predicted similar effects as found for executive functions, given the positive associations between executive functions and sentence processing in aging (e.g., Goral et al., 2011). Methods: Data were collected from 190 healthy adults aged 55 to 84 years (M = 71.1, SD = 8.1), with no history of neurological or psychiatric disorders. Procedure: PF was measured prior to language testing. Measures included forced expiratory volume in 1 second (FEV1) and forced vital capacity (FVC). Language functions were assessed through performance on computer-administered lexical retrieval and sentence processing tasks. Sentence processing was measured using two auditory comprehension tasks: one of embedded sentences (ES), the other of sentences with multiple negatives (MN). Lexical retrieval was measured using the Boston Naming Test (BNT) and Action Naming Test (ANT). Performance was scored for percent accuracy. Additionally, lexical retrieval was evaluated with a phonemic fluency task (FAS), which also taps executive function abilities. Statistical Analyses: Multiple regression was used to examine the association between pulmonary and language functions, adjusting for age, education, gender, history of respiratory illness, current level of physical activities, and current and past smoking. Results: Better PF was associated with better sentence processing and lexical retrieval on naming tasks, but not with phonemic fluency, after adjusting for covariates. Higher FVC was
Méndez Orellana, Carolina P; van de Sandt-Koenderman, Mieke E; Saliasi, Emi; van der Meulen, Ineke; Klip, Simone; van der Lugt, Aad; Smits, Marion
Melodic Intonation Therapy (MIT) uses the melodic elements of speech to improve language production in severe nonfluent aphasia. A crucial element of MIT is the melodically intoned auditory input: the patient listens to the therapist singing a target utterance. Such input of melodically intoned language facilitates production, whereas auditory input of spoken language does not. Using a sparse sampling fMRI sequence, we examined the differential auditory processing of spoken and melodically intoned language. Nineteen right-handed healthy volunteers performed an auditory lexical decision task in an event related design consisting of spoken and melodically intoned meaningful and meaningless items. The control conditions consisted of neutral utterances, either melodically intoned or spoken. Irrespective of whether the items were normally spoken or melodically intoned, meaningful items showed greater activation in the supramarginal gyrus and inferior parietal lobule, predominantly in the left hemisphere. Melodically intoned language activated both temporal lobes rather symmetrically, as well as the right frontal lobe cortices, indicating that these regions are engaged in the acoustic complexity of melodically intoned stimuli. Compared to spoken language, melodically intoned language activated sensory motor regions and articulatory language networks in the left hemisphere, but only when meaningful language was used. Our results suggest that the facilitatory effect of MIT may - in part - depend on an auditory input which combines melody and meaning. Combined melody and meaning provide a sound basis for the further investigation of melodic language processing in aphasic patients, and eventually the neurophysiological processes underlying MIT.
Lipski, John M.
The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)
Time-compressed spoken words enhance driving performance in complex visual scenarios: evidence of crossmodal semantic priming effects in basic cognitive experiments and applied driving simulator studies
Would speech warnings be a good option to inform drivers about time-critical traffic situations? Even though spoken words take time until they can be understood, listening is well trained from the earliest age and happens quite automatically. Therefore, it is conceivable that spoken words could immediately preactivate semantically identical (but physically diverse) visual information, and thereby enhance respective processing. Interestingly, this implies a crossmodal semantic effect of audito...
The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who were taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.
Wiseheart, Rebecca; Altmann, Lori J P
Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributed to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form, and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly more slowly and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.
Mishra, Ramesh Kumar; Singh, Niharika
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard
Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...
Richards, Jack C.
In order to plan for the professional development of English language teachers, we need to have a comprehensive understanding of what competence and expertise in language teaching consists of. What essential skills, knowledge, values, attitudes and goals do language teachers need, and how can these be acquired? This paper seeks to explore these…
Flores, Glenn; Tomany-Korman, Sandra C
Fifty-five million Americans speak a non-English primary language at home, but little is known about health disparities for children in non-English-primary-language households. Our study objective was to examine whether disparities in medical and dental health, access to care, and use of services exist for children in non-English-primary-language households. The National Survey of Children's Health was a telephone survey in 2003-2004 of a nationwide sample of parents of 102,353 children 0 to 17 years old. Disparities in medical and oral health and health care were examined for children in non-English-primary-language households compared with children in English-primary-language households, both in bivariate analyses and in multivariable analyses that adjusted for 8 covariates (child's age, race/ethnicity, and medical or dental insurance coverage, caregiver's highest educational attainment and employment status, number of children and adults in the household, and poverty status). Children in non-English-primary-language households were significantly more likely than children in English-primary-language households to be poor (42% vs 13%) and Latino or Asian/Pacific Islander. Significantly higher proportions of children in non-English-primary-language households were not in excellent/very good health (43% vs 12%), were overweight/at risk for overweight (48% vs 39%), had teeth in fair/poor condition (27% vs 7%), and were uninsured (27% vs 6%), sporadically insured (20% vs 10%), and lacked dental insurance (39% vs 20%). Children in non-English-primary-language households more often had no usual source of medical care (38% vs 13%), made no medical (27% vs 12%) or preventive dental (14% vs 6%) visits in the previous year, and had problems obtaining specialty care (40% vs 23%). Latino and Asian children in non-English-primary-language households had several unique disparities compared with white children in non-English-primary-language households. Almost all disparities
Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G
Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and in their use of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Mills, Monique T.
Purpose: This study investigated the fictional narrative performance of school-age African American children across 3 elicitation contexts that differed in the type of visual stimulus presented. Method: A total of 54 children in Grades 2 through 5 produced narratives across 3 different visual conditions: no visual, picture sequence, and single…
Pfau, R.; Steinbach, M.; Woll, B.
Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
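The retrieval comparison described above can be pictured with a standard tf-idf bag-of-words index ranked by cosine similarity against a query, applied to whichever token source (slide text or ASR transcript) is being evaluated. This is a minimal sketch under stated assumptions, not the paper's actual system; the toy "lecture" documents and the query are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists -> (list of {term: weight} vectors, idf table)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps terms present in every doc
    vecs = [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]
    return vecs, idf

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    """Return document indices ordered by tf-idf cosine similarity to the query."""
    vecs, idf = tfidf_vectors(docs)
    q = {t: c * idf.get(t, 0.0) for t, c in Counter(query).items()}
    return sorted(range(len(docs)), key=lambda i: -cosine(q, vecs[i]))

# Invented toy "lectures": e.g. one indexed by slide text, another by ASR output.
docs = [["neural", "networks", "backpropagation"],
        ["sorting", "quicksort", "partition"]]
print(rank(["quicksort", "pivot"], docs)[0])  # → 1
```

Running the same ranking over slide-derived and ASR-derived token streams for identical queries is one way the relative precision of the two text sources could be compared.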
Christine E Potter
Five- and six-year-old children (n=160) participated in three studies designed to explore language discrimination. After an initial exposure period (during which children heard either an unfamiliar language, a familiar language, or music), children performed an ABX discrimination task involving two unfamiliar languages that were either similar (Spanish vs. Italian) or different (Spanish vs. Mandarin). On each trial, participants heard two sentences spoken by two individuals, each in an unfamiliar language. The pair was followed by a third sentence spoken in one of the two languages. Participants were asked to judge whether the third sentence was spoken by the first speaker or the second speaker. Across studies, both the difficulty of the discrimination contrast and the relation between exposure and test materials affected children's performance. In particular, language discrimination performance was facilitated by an initial exposure to a different unfamiliar language, suggesting that experience can help tune children's attention to the relevant features of novel languages.
González-Alvarez, Julio; Palomar-García, María-Angeles
Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
Academic performance at universities in South Africa is a cause of concern. It is widely acknowledged that there are a variety of factors that contribute to poor academic performance, but language is regarded as one of the most important issues in this discussion. In this article, the relationship between language and ...
Leech, Geoffrey; Wilson, Andrew (all of Lancaster University)
Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.
Callejas, Zoraida; Griol, David; López-Cózar, Ramón
In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.
Dr. Dexter R. Buted
The study assessed the past and present condition of senior tourism students with regard to their foreign language class. Specifically, it described the profile of the professors teaching foreign languages; determined the senior tourism students' performance in their foreign language class; assessed the teaching strategies used by the professors; tested the significant relationship between the students' performance and the teaching strategies used; and, lastly, proposed an action plan to help tourism students in the study of foreign languages. The researchers used the descriptive method of research, with one hundred seventy-eight (178) respondents comprising all senior tourism students enrolled in a foreign language class. The results revealed that the professors teaching foreign languages are aged 61 and above, hold master's degrees, have 10 or more years of experience, carry a teaching load of 21 units, and can speak Spanish. The students are able to speak and comprehend Mandarin, French and Spanish. The teaching technique most often used by the professors was giving and evaluating student performances. Moreover, the students' performance in the foreign language class is affected by the teaching strategies used by the professors, and a plan was proposed to improve the foreign language subject.
Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan
How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time-invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
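The time-invariant "string kernel" idea can be illustrated with open diphones: coding a word as the bag of its ordered phoneme pairs makes the representation independent of where in the input stream the word occurs. This is a toy sketch of the representational idea only, not the model itself; the phoneme strings are invented.

```python
from collections import Counter
from math import sqrt

def open_diphones(phonemes):
    """Bag of ordered phoneme pairs (every i < j): a time-invariant code,
    since the same word yields the same bag wherever it starts in the input."""
    return Counter((phonemes[i], phonemes[j])
                   for i in range(len(phonemes))
                   for j in range(i + 1, len(phonemes)))

def similarity(a, b):
    """Cosine similarity between two diphone bags."""
    va, vb = open_diphones(a), open_diphones(b)
    dot = sum(c * vb.get(k, 0) for k, c in va.items())
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Anagrams share some, but not all, ordered pairs:
print(round(similarity(list("kat"), list("akt")), 2))  # → 0.67
```

Because order is encoded in the pairs themselves rather than in positional units, similar-sounding words get graded similarity without duplicating units at every time step.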
Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E
The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
Hoedemaekers, C.; Keegan, A.
We draw on Lacan’s notion of language to study employee subjectivity in a public sector organization (Publica) in the Netherlands. Our main contribution lies in using Lacan’s theorization of language and subjectivity as a basis for a detailed textual analysis of how local organizational discourses
McKenzie, Lolita D.
English language learners (ELLs) spend a majority of their instructional time in mainstream classrooms with mainstream teachers. Reading is an area with which many ELLs are challenged when placed within mainstream classrooms. Scaffolding has been identified as one of the best teaching practices for helping students read. ELL students in a local…
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena
The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…
Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.
Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,
English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…
In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…
Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…
de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.
Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and
We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...
Lestari, Dessi Puji; Furui, Sadaoki
Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
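The rule-based phoneme mapping described above can be pictured as a simple substitution over lexicon pronunciations. The mapping table below is invented for illustration; the abstract does not reproduce the paper's actual rule set.

```python
# Invented example rules: English phonemes absent from Indonesian are replaced
# by plausible nearby Indonesian phonemes (NOT the paper's real rules).
ENGLISH_TO_INDONESIAN = {
    "TH": "T",  # voiceless "th" as in "think"
    "DH": "D",  # voiced "th" as in "the"
    "V":  "F",
    "Z":  "S",
}

def map_pronunciation(phonemes):
    """Rewrite an English lexicon entry's phoneme string for an Indonesian ASR
    lexicon, leaving shared phonemes untouched."""
    return [ENGLISH_TO_INDONESIAN.get(p, p) for p in phonemes]

# Hypothetical lexicon entry:
print(map_pronunciation(["DH", "AH", "V", "OY", "S"]))  # → ['D', 'AH', 'F', 'OY', 'S']
```

Applying such a pass over every English word in the lexicon gives the recognizer pronunciations it can actually match against Indonesian acoustic models.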
Ricketts, Jessie; Dockrell, Julie E; Patel, Nita; Charman, Tony; Lindsay, Geoff
This experiment investigated whether children with specific language impairment (SLI), children with autism spectrum disorders (ASD), and typically developing children benefit from the incidental presence of orthography when learning new oral vocabulary items. Children with SLI, children with ASD, and typically developing children (n=27 per group) between 8 and 13 years of age were matched in triplets for age and nonverbal reasoning. Participants were taught 12 mappings between novel phonological strings and referents; half of these mappings were trained with orthography present and half were trained with orthography absent. Groups did not differ on the ability to learn new oral vocabulary, although there was some indication that children with ASD were slower than controls to identify newly learned items. During training, the ASD, SLI, and typically developing groups benefited from orthography to the same extent. In supplementary analyses, children with SLI were matched in pairs to an additional control group of younger typically developing children for nonword reading. Compared with younger controls, children with SLI showed equivalent oral vocabulary acquisition and benefit from orthography during training. Our findings are consistent with current theoretical accounts of how lexical entries are acquired and replicate previous studies that have shown orthographic facilitation for vocabulary acquisition in typically developing children and children with ASD. We demonstrate this effect in SLI for the first time. The study provides evidence that the presence of orthographic cues can support oral vocabulary acquisition, motivating intervention approaches (as well as standard classroom teaching) that emphasize the orthographic form. Copyright © 2015 Elsevier Inc. All rights reserved.
Spoken Language Technologies for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil... et al., Procedia Computer Science 81 (2016) 128-135. Our research focuses on pronunciation modeling of English (embedded language) words within a Swahili spoken term detection system.
Given the nature of spoken text, the first requirement of an appropriate grammar is its ability to account for stretches of language (including recurring types of text or genres), in addition to clause level patterns. Second, the grammatical model needs to be part of a wider theory of language that recognises the functional nature and educational purposes of spoken text. The model also needs to be designed in a sufficiently comprehensive way so as to account for grammatical forms in speech...
Gerber, Ans; Engelbrecht, Johann; Harding, Ansie; Rogan, John
Understanding abstract concepts and ideas in mathematics is difficult, even if instruction takes place in the first language of the student. Yet worldwide students often have to master mathematics via a second or third language. The majority of students in South Africa — a country with eleven official languages — have to face this difficulty. In a quantitative study of first year calculus students, we investigated two groups of students. For one group tuition took place in their home language; for the second group, tuition was in English, a second or even a third language. Performance data on their secondary mathematics and first year tertiary calculus were analysed. The study showed that there was no significant difference between the adjusted means of the entire group of first language learners and the entire group of second language learners. Neither was there any statistically significant difference between the performances of the two groups of second language learners (based on the adjusted means). Yet, there did seem to be a significant difference between the achievement of Afrikaans students attending Afrikaans lectures and Afrikaans students attending English lectures.
This paper presents a novel methodology for spoken document information retrieval from spontaneous speech corpora and for converting the retrieved documents into the corresponding language text. The proposed work involves three major areas, namely spoken keyword detection, spoken document retrieval and automatic speech recognition. The keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN) for spoken keyword detection. It involves sliding a frame-based keyword template along the audio documents and searching for a match by means of a confidence score acquired from the normalized squared error of the AANN. This work presents a new spoken keyword spotting algorithm. Based on the match, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. The experiments are conducted using a Dravidian language database, and the results suggest that the proposed method is promising for retrieving the documents relevant to a spoken query and transforming them into the corresponding language text.
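The sliding-template search can be sketched as follows. Here a plain normalized squared error between the template and each window stands in for the AANN reconstruction error, so the scoring function, the threshold, and the toy feature frames are all placeholders rather than the paper's model.

```python
import numpy as np

def confidence(template, window):
    """Map a normalized squared error to a (0, 1] confidence score.
    Plain template-window error is used here as a stand-in for the
    AANN's reconstruction error described in the abstract."""
    err = np.sum((template - window) ** 2) / np.sum(template ** 2)
    return 1.0 / (1.0 + err)

def spot_keyword(template, frames, threshold=0.8):
    """Slide the frame-based keyword template along the document's feature
    frames; return start indices whose confidence clears the threshold."""
    k = len(template)
    return [s for s in range(len(frames) - k + 1)
            if confidence(template, frames[s:s + k]) >= threshold]

# Toy feature frames (10 frames x 2 dims) with the "keyword" embedded at frame 4:
doc = np.zeros((10, 2))
doc[4:7] = 1.0
print(spot_keyword(np.ones((3, 2)), doc))  # → [4]
```

Documents whose best window score clears the threshold would then be the candidates retrieved for a spoken query.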
Schillingmann, Lars; Ernst, Jessica; Keite, Verena; Wrede, Britta; Meyer, Antje S; Belke, Eva
In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool's performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
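Downstream of such an alignment, onset latencies fall out directly from the word-level intervals. A minimal sketch, assuming the intervals have already been read from the TextGrid as (label, onset, offset) tuples; the silence labels and the example alignment are invented, not AlignTool's actual output format.

```python
def onset_latency(intervals, silence_labels=("sil", "sp", "")):
    """Return the onset time (in seconds) of the first non-silent interval,
    i.e. the speech onset latency relative to recording start, or None if
    every interval is silent."""
    for label, onset, offset in intervals:
        if label not in silence_labels:
            return onset
    return None

# Invented alignment of a single-word response:
aligned = [("sil", 0.00, 0.42), ("dog", 0.42, 0.81), ("sil", 0.81, 1.00)]
print(onset_latency(aligned))  # → 0.42
```

The same interval list also yields word offsets and durations, which is why semi-automatic alignment saves so much manual measurement time.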
Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina; Jensen, Jørgen Hedegaard
The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child. The aim of the study was to identify factors associated with speech and language outcomes for cochlear implanted children and also to estimate the effect-related odds ratio for each factor in relation to the children's speech and language performances. Data relate to 155 prelingually deafened children with cochlear implant (CI). A test battery consisting of six different speech and language tests/assessments was used. Seven different factors were considered, i.e. hearing age, implantation age, gender, educational placement, ear of implantation, CI center, and communication mode. Logistic regression models and proportional odds models were used to analyze the relationship between the considered factors and test responses. The communication mode at home proved essential to speech and language outcome, as children exposed to spoken language had markedly better odds of performing well in all tests, compared with children exposed to a mixture of spoken language and sign support, or sign language.
Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting
Oral production is an important part of English learning. The lack of a language environment with efficient instruction and feedback is a major obstacle to non-native speakers' improvement of their spoken English. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…
Isaki, Emi; Spaulding, Tammie J; Plante, Elena
The purpose of this study is to investigate the performance of adults with language-based learning disorders (L/LD) and normal language controls on verbal short-term and verbal working memory tasks. Eighteen adults with L/LD and 18 normal language controls were compared on verbal short-term memory and verbal working memory tasks under low, moderate, and high linguistic processing loads. Results indicate no significant group differences on any of the verbal short-term memory tasks or on the verbal working memory tasks with low and moderate language loads. Statistically significant group differences were found on the most taxing condition, the verbal working memory task involving a high language processing load. The L/LD group performed significantly worse than the control group on both the processing and storage components of this task. These results support the limited-capacity hypothesis for adults with L/LD. Rather than presenting with a uniform impairment in verbal memory, they exhibit verbal memory deficits only when their capacity limitations are exceeded under relatively high combined memory and language processing demands. The reader will (1) understand the relationship between increased linguistic demands and working memory, and (2) learn about working memory skills in adults with language learning disorders.
Defense Language Institute, Lackland AFB, TX, English Language Branch. Job Language Performance Requirements for Pre-BT Extended English Language Training (report AD-A117 8). The record's sample items cover verb tenses ("Make sure you clearly understand the task you are to teach"; "You will be tested"; "If they elect to take it, they must…") and a list of lexical and structural items for English language structures, including sentence types (declarative statements, interrogative questions).
Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery (Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie)
The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with the population of Czech Deaf children, both users of Czech Sign Language and those using spoken Czech.
Guapacha Chamorro, Maria Eugenia; Benavidez Paz, Luis Humberto
This paper reports an action-research study on language learning strategies in tertiary education at a Colombian university. The study aimed at improving the English language performance and language learning strategies use of 33 first-year pre-service language teachers by combining elements from two models: the cognitive academic language…
van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…
Brimo, Danielle; Lund, Emily; Sapp, Alysha
Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. To determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments, studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured, and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax constructs measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below
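The pooling step at the heart of such a meta-analysis can be illustrated with the fixed-effect special case of inverse-variance weighting; the random-effects model the review used additionally adds an estimate of between-study variance (tau²) to each study's variance before weighting. The effect sizes and variances below are invented for illustration.

```python
def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model).

    Each study's effect size is weighted by the reciprocal of its sampling
    variance, so more precise studies pull the pooled estimate harder. A
    random-effects model would use 1 / (variance + tau_squared) instead.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled
```

With equal variances this reduces to the plain mean; unequal variances shift the pooled estimate toward the more precise study.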
In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable, and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic, and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...
Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford
We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and ``motherese'' (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
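The intra-speaker pitch measures listed above (minimum, maximum, mean, range) reduce to a few lines of code over an F0 track. The convention that unvoiced frames are marked 0 and excluded is an assumption about the corpus format, not a detail from the abstract.

```python
from statistics import mean

def pitch_profile(f0_values):
    """Summarize a speaker's F0 track in Hz: minimum, maximum, mean, and range.

    Frames with F0 == 0 are treated as unvoiced and excluded (an assumed
    convention; pitch trackers commonly emit 0 for unvoiced frames).
    """
    voiced = [f for f in f0_values if f > 0]
    lo, hi = min(voiced), max(voiced)
    return {"min": lo, "max": hi, "mean": mean(voiced), "range": hi - lo}
```

Comparing these profiles per speaker across elicitation tasks gives the style contrasts the corpus is designed to expose (e.g. expanded pitch range in child-directed speech).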
I Nengah Sudipa
This article investigates the spoken ability in Bahasa Indonesia (BI) of German students who studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is due to influence from their own language system, called interference, especially in word order.
Chen, Wei; Mostow, Jack; Aist, Gregory
Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…
Vol. 68, No. 2 (2017), pp. 305-315. ISSN 0021-5597. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: correlative conjunctions; spoken Czech; cohesion. Subject RIV: AI - Linguistics. OECD field: Linguistics. http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf
Maria Eugenia Guapacha Chamorro
This paper reports an action-research study on language learning strategies in tertiary education at a Colombian university. The study aimed at improving the English language performance and language learning strategies use of 33 first-year pre-service language teachers by combining elements from two models: the cognitive academic language learning approach and task-based language teaching. Data were gathered through surveys, a focus group, students' and teachers' journals, language tests, and documentary analysis. Results evidenced that the students improved in speaking, writing, grammar, vocabulary, and in their language learning strategies repertoire. In conclusion, explicit strategy instruction in the proposed model proved a proper combination for improving learners' language learning strategies and performance.
Marti, U-V.; Bunke, H.
In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use perplexity to compare the different models and their predictive power, and relate it to the performance of a recognition system under different
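Perplexity, the comparison measure used here, is the exponentiated average negative log-probability a language model assigns to the text; lower perplexity means better prediction. A minimal sketch (using natural logarithms; base is immaterial as long as the log and the exponential match):

```python
import math

def perplexity(logprobs):
    """Perplexity of a model over a word sequence.

    `logprobs` holds log p(w_i | history) for each of the N words, in natural
    log. PP = exp(-(1/N) * sum of log-probabilities).
    """
    n = len(logprobs)
    return math.exp(-sum(logprobs) / n)
```

As a sanity check, a uniform model over an 8-word vocabulary assigns every word probability 1/8 and so has perplexity 8, matching the intuition that perplexity measures the model's effective branching factor.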
Kowal, Sabine; O'Connell, Daniel C
The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.
Mast, Marion; Maier, Elisabeth; Schmitz, Birte
This report describes how spoken language turns are segmented into utterances in the framework of the Verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule is in many cases insufficient, and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...
Kormos, Judit; Safar, Anna
In our research we addressed the question of what the relationship is between phonological short-term and working memory capacity and performance in an end-of-year reading, writing, listening, speaking, and use of English test. The participants of our study were 121 secondary school students aged 15-16 in the first intensive language training year of…
The purpose of this study is to examine the potential of social networking sites for autonomous language learners, specifically the role of hashtag literacies in learners' affiliation performances with native speakers. Informed by ecological approach and guided by Zappavigna's (2012) concepts of "searchable talk" and "ambient…
Alotaibi, Yousef Ajami; Hussain, Amir
Arabic is one of the world's oldest languages and is currently the second most spoken language in terms of number of speakers. However, it has not received much attention from the traditional speech processing research community. This study is specifically concerned with the analysis of vowels in modern standard Arabic dialect. The first and second formant values in these vowels are investigated and the differences and similarities between the vowels are explored using consonant-vowels-consonant (CVC) utterances. For this purpose, an HMM based recognizer was built to classify the vowels and the performance of the recognizer analyzed to help understand the similarities and dissimilarities between the phonetic features of vowels. The vowels are also analyzed in both time and frequency domains, and the consistent findings of the analysis are expected to facilitate future Arabic speech processing tasks such as vowel and speech recognition and classification.
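The link between formant values and vowel identity that the study exploits can be sketched as a nearest-centroid classifier in (F1, F2) space. The centroid values in the test below are illustrative placeholders, not measured Modern Standard Arabic formants; the study itself used an HMM-based recognizer over CVC utterances rather than this simple geometric rule.

```python
def classify_vowel(f1, f2, centroids):
    """Assign a vowel label by the nearest (F1, F2) centroid.

    `centroids` maps vowel labels to (F1, F2) pairs in Hz. Distance is plain
    Euclidean; real systems often use Bark/mel scaling and more features.
    """
    def dist(label):
        cf1, cf2 = centroids[label]
        return ((f1 - cf1) ** 2 + (f2 - cf2) ** 2) ** 0.5
    return min(centroids, key=dist)
```

High-vowel tokens with low F1 and high F2 land near an /i/-like centroid, while low central tokens land near /a/, mirroring the F1/F2 separation the abstract describes.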
Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth
In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase.
Shawer, Saad Fathy
This article examines the differences in language learning strategies (LLS) use between preservice teachers of English as a foreign language (EFL) and Arabic as a second language (ASL). It also examines the relationship between LLS use and language performance (academic achievement and four language skills) among ASL students. The study made use…
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua
Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, than on phonemic segment-based processing. We interpret the differences in spoken word
Munro, Natalie; Lee, Kerrie; Baker, Elise
Preschool and early school-aged children with specific language impairment not only have spoken language difficulties, but also are at risk of future literacy problems. Effective interventions targeting both spoken language and emergent literacy skills for this population are limited. This paper reports a feasibility study of a hybrid language intervention approach that targets vocabulary knowledge and phonological awareness skills within the context of oral narrative, storybook reading, and drill-based games. This study also reports on two novel, experimental assessments that were developed to expand options for measuring changes in lexical skills in children. Seventeen children with specific language impairment participated in a pilot within-group evaluation of a hybrid intervention programme. The children's performance at pre- and post-intervention was compared on a range of clinical and experimental assessment measures targeting both spoken language and phonological awareness skills. Each child received intervention for six one-hour sessions scheduled on a weekly basis. Intervention sessions focused on training phonological awareness skills as well as lexical-semantic features of words within the context of oral and storybook narrative and drill-based games. The children significantly improved on clinical measures of phonological awareness, spoken vocabulary and oral narrative. Lexical-semantic and sublexical vocabulary knowledge also significantly improved on the experimental measures used in the study. The results of this feasibility study suggest that a larger scale experimental trial of an integrated spoken language and emergent literacy intervention approach for preschool and early school-aged children with specific language impairment is warranted.
Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.
Mills, Brian D; Lai, Janie; Brown, Timothy T; Erhart, Matthew; Halgren, Eric; Reilly, Judy; Appelbaum, Mark; Moses, Pamela
This study examined the relationship between magnetic resonance imaging (MRI)-based measures of gray matter structure and morphosyntax production in a spoken narrative in 17 typical children (TD) and 11 children with high functioning autism (HFA) between 6 and 13 years of age. In the TD group, cortical structure was related to narrative performance in the left inferior frontal gyrus (Broca's area), the right middle frontal sulcus, and the right inferior temporal sulcus. No associations were found in children with HFA. These findings suggest a systematic coupling between brain structure and spontaneous language in TD children and a disruption of these relationships in children with HFA.
It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the
Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while the less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
Ludke, Karen M; Ferreira, Fernanda; Overy, Katie
This study presents the first experimental evidence that singing can facilitate short-term paired-associate phrase learning in an unfamiliar language (Hungarian). Sixty adult participants were randomly assigned to one of three "listen-and-repeat" learning conditions: speaking, rhythmic speaking, or singing. Participants in the singing condition showed superior overall performance on a collection of Hungarian language tests after a 15-min learning period, as compared with participants in the speaking and rhythmic speaking conditions. This superior performance was statistically significant, suggesting that a singing-based learning method can facilitate verbatim memory for spoken foreign language phrases.
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…
Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...
One of the thorniest issues of linguistic policy in North-Western Transylvania has been the establishment of linguistic arrangements in the mandatory education system. Supported also by arguments that refer to the equal opportunity rule, native language education for ethnic minority children, especially in Hungarian, has flourished both in mixed schools and in schools that are segregated by teaching language. This article assesses the effects that the various linguistic contexts in which a Hungarian teenager in the region studies can have on his/her performance in school, compared to the performance of his/her colleagues who study in Romanian. I start from two hypotheses: linguistic shortcoming and opposition culture. Hierarchical linear regression modeling of average school results in the seventh grade, on a sample of over 3,700 eighth-grade pupils in Bihor County, indicates a significant and systematic disadvantage for Hungarian pupils, although small in absolute value, irrespective of the linguistic arrangement. Moreover, both the linguistic disadvantage thesis and the opposition culture model can be supported, although the results are not conclusive in this respect.
Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas
Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
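The band-limited power contrast at the core of this study can be illustrated with a naive DFT on a synthetic signal: a 5 Hz tone falls squarely in the theta band (~3-7 Hz) and contributes essentially nothing to the alpha band (~8-12 Hz). The sampling rate and band edges below are illustrative, and real M/EEG analyses use windowed, multi-taper, or wavelet estimates with spatial filtering rather than this raw DFT.

```python
import math

def band_power(signal, fs, lo, hi):
    """Total DFT power in the [lo, hi] Hz band (naive DFT; fine for short signals)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 100.0                                            # sampling rate in Hz
t = [i / fs for i in range(200)]                      # 2 s of signal
tone = [math.sin(2 * math.pi * 5 * x) for x in t]     # 5 Hz tone: theta band
theta = band_power(tone, fs, 3, 7)
alpha = band_power(tone, fs, 8, 12)
```

Because the 5 Hz component lands exactly on a DFT bin inside the theta band, `theta` is large while `alpha` is effectively zero, which is the kind of dissociation between frequency bands the study quantifies on real data.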
Sulaiman, Norazean; Muhammad, Ahmad Mazli; Ganapathy, Nurul Nadiah Dewi Faizul; Khairuddin, Zulaikha; Othman, Salwa
Listening is a very crucial skill to be learnt in second language classroom because it is essential for the development of spoken language proficiency (Hamouda, 2013). The aim of this study is to investigate the significant differences in terms of students' performance when using traditional (audio-only) method and video media method. The data of…
The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies, following the pragmatic typology of metonymic functions, are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.
Li, Le; Abutalebi, Jubin; Zou, Lijuan; Yan, Xin; Liu, Lanfang; Feng, Xiaoxia; Wang, Ruiming; Guo, Taomei; Ding, Guosheng
Previous neuroimaging studies have revealed that bilingualism induces both structural and functional neuroplasticity in the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (LCN), both of which are associated with cognitive control. Since these "control" regions should work together with other language regions during language processing, we hypothesized that bilingualism may also alter the functional interaction between the dACC/LCN and language regions. Here we tested this hypothesis by exploring the functional connectivity (FC) in bimodal bilinguals and monolinguals using functional MRI when they either performed a picture naming task with spoken language or were in resting state. We found that for bimodal bilinguals who use spoken and sign languages, the FC of the dACC with regions involved in spoken language (e.g. the left superior temporal gyrus) was stronger in performing the task, but weaker in the resting state as compared to monolinguals. For the LCN, its intrinsic FC with sign language regions including the left inferior temporo-occipital part and right inferior and superior parietal lobules was increased in the bilinguals. These results demonstrate that bilingual experience may alter the brain functional interaction between "control" regions and "language" regions. For different control regions, the FC alters in different ways. The findings also deepen our understanding of the functional roles of the dACC and LCN in language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
This article will examine the potential for language change from the bottom-up given the new domains in which minority languages are present as a result of the process of language mobility. Drawing on a theoretical notion of sociolinguistic scales, this article aims to discuss how the position of the Irish language has been reconfigured. From this…
The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…
Deters, Kacie D; Nho, Kwangsik; Risacher, Shannon L; Kim, Sungeun; Ramanan, Vijay K; Crane, Paul K; Apostolova, Liana G; Saykin, Andrew J
Language impairment is common in prodromal stages of Alzheimer's disease (AD) and progresses over time. However, the genetic architecture underlying language performance is poorly understood. To identify novel genetic variants associated with language performance, we analyzed brain MRI and performed a genome-wide association study (GWAS) using a composite measure of language performance from the Alzheimer's Disease Neuroimaging Initiative (ADNI; n=1560). The language composite score was associated with brain atrophy on MRI in language and semantic areas. GWAS identified GLI3 (GLI family zinc finger 3), a developmental gene involved in brain structure, as a putative gene associated with language dysfunction in AD. Copyright © 2017 Elsevier Inc. All rights reserved.
Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
Watanabe, Shigeru; Yamamoto, Erico; Uozumi, Midori
Java sparrows (Padda oryzivora) were trained to discriminate English from Chinese spoken by a bilingual speaker. They could learn discrimination and showed generalization to new sentences spoken by the same speaker and those spoken by a new speaker. Thus, the birds distinguished between English and Chinese. Although auditory cues for the discrimination were not specified, this is the first evidence that non-mammalian species can discriminate human languages.
Delano, Monica E
The effects of a multicomponent intervention involving self-regulated strategy development delivered via video self-modeling on the written language performance of 3 students with Asperger syndrome were examined. During intervention sessions, each student watched a video of himself performing strategies for increasing the number of words written and the number of functional essay elements. He then wrote a persuasive essay. The number of words written and number of functional essay elements included in each essay were measured. Each student demonstrated gains in the number of words written and number of functional essay elements. Maintenance of treatment effects at follow-up varied across targets and participants. Implications for future research are suggested. PMID:17624076
Wilson, K. Ryan; O'Rourke, Heather; Wozniak, Linda A.; Kostopoulos, Ellina; Marchand, Yannick; Newman, Aaron J.
Our goal was to characterize the effects of intensive aphasia therapy on the N400, an electrophysiological index of lexical-semantic processing. Immediately before and after 4 weeks of intensive speech-language therapy, people with aphasia performed a task in which they had to determine whether spoken words were a "match" or a "mismatch" to…
Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559
Arndt, Karen Barako; Schuele, C. Melanie
Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…
Casey, Laura Baylot; Bicard, David F.
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
Crume, Peter K
The National Reading Panel emphasizes that spoken language phonological awareness (PA) developed at home and school can lead to improvements in reading performance in young children. However, research indicates that many deaf children are good readers even though they have limited spoken language PA. Is it possible that some deaf students benefit from teachers who promote sign language PA instead? The purpose of this qualitative study is to examine teachers' beliefs and instructional practices related to sign language PA. A thematic analysis is conducted on 10 participant interviews at an ASL/English bilingual school for the deaf to understand their views and instructional practices. The findings reveal that the participants had strong beliefs in developing students' structural knowledge of signs and used a variety of instructional strategies to build students' knowledge of sign structures in order to promote their language and literacy skills.
Full Text Available This research, entitled "English Foreign Language Learners' Kinesics on Teaching Performance", aims to identify and describe the forms and functions of kinesics used by EFL learners during teaching performance, and to describe the importance of kinesics in teaching activity. This is a descriptive qualitative study. The data are taken from the teaching performances of sixth-semester EFL learners at STKIP PGRI Bandar Lampung. The researcher observed the learners' kinesics in teaching activity using the observation method and the noting technique, and analyzed the data with the description method. The results show that the trainees performed twenty kinds of kinesics: sitting relaxed, arms crossed in front of the chest, standing relaxed, walking around the class, checking the time, stroking the chin or beard, smiling, looking happily surprised, wrinkling the forehead, nodding the head, shaking the head, giving a thumbs up, pointing a finger, counting on the hand, waving a hand, looking up, following with the eyes, squinting, looking in the eye, and breaking or making eye contact. Keywords: Kinesics, EFL Learners, Teaching Performance
Juuso, Esko K.
Performance improvement is taken as the primary goal in asset management. Advanced data analysis is needed to efficiently integrate condition monitoring data into operation and maintenance. Intelligent stress and condition indices have been developed for control and condition monitoring by combining generalized norms with efficient nonlinear scaling. These nonlinear scaling methodologies can also be used to handle performance measures used for management, since management-oriented indicators can be presented on the same scale as intelligent condition and stress indices. Performance indicators are responses of the process, machine or system to the stress contributions analyzed from process and condition monitoring data. Scaled values are directly used in intelligent temporal analysis to calculate fluctuations and trends. All these methodologies can be used in prognostics and fatigue prediction. The meanings of the variables are beneficial in extracting expert knowledge and representing information in natural language. The idea of dividing the problems into variable-specific meanings and directions of interactions provides various improvements for performance monitoring and decision making. The integrated temporal analysis and uncertainty processing facilitates the efficient use of domain expertise. Measurements can be monitored with generalized statistical process control (GSPC) based on the same scaling functions.
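The scaling idea described above can be illustrated with a minimal sketch: a monotone, piecewise-linear function maps raw measurements onto the dimensionless range [-2, 2] used by such condition indices. The corner points below are illustrative assumptions, not values from the study (which derives them from generalized norms of the data).

```python
import numpy as np

def scale_index(x, corners):
    """Map a raw measurement onto [-2, 2] with a monotone, piecewise-linear
    scaling function defined by five corner points (a simplified stand-in
    for the nonlinear scaling described in the abstract)."""
    # corners[i] is the raw signal value corresponding to index level levels[i]
    levels = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    return float(np.interp(x, corners, levels))

# Hypothetical corner points, e.g. derived from generalized norms of
# historical vibration data (these numbers are illustrative only).
corners = [0.1, 0.4, 1.0, 2.5, 6.0]
print(scale_index(1.0, corners))  # a nominal reading maps to 0.0
print(scale_index(4.0, corners))  # an elevated reading maps between 1 and 2
```

Because `np.interp` clamps outside the corner range, extreme readings saturate at -2 or +2, which keeps indices from different machines directly comparable on one scale.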
Klem, Marianne; Hagtvet, Bente; Hulme, Charles; Gustafsson, Jan-Eric
Purpose: This study investigated the stability and growth of preschool language skills and explored latent class analysis as an approach for identifying children at risk of language impairment. Method: The authors present data from a large-scale 2-year longitudinal study, in which 600 children were assessed with a language-screening tool…
Nava, Andrea; Pedrazzini, Luciana
We describe an exploratory study carried out within the University of Milan, Department of English the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…
Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra
-, 25.04.2017 (2017), s. 393-414 ISSN 2509-9507 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : Causality * Discourse marker * Spoken language * Czech Subject RIV: AI - Linguistics OBOR OECD: Linguistics https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf
Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth
In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…
With question and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book, with all parts of speech and grammar explained. Used by ELT self-study students.
Full Text Available Background and Aim: Specific language impairment (SLI) is one of the most prevalent developmental language disorders, yet it has received little attention in Persian research. The aim of this study was to investigate the differences in some morpho-syntactic features of speech and other language skills between Persian children with specific language impairment and their normal age-matched peers. Moreover, the usefulness of the test of language development-3 (TOLD-3), Persian version, as a tool for identifying Persian-speaking children with this impairment was investigated. Methods: In a case-control study, the results of the test of language development and speech-sample analyses of 13 Persian-speaking children (5 to 7 years old) with specific language impairment were compared with those of 13 age-matched normal children. Results: There were significant differences between the scores of the specific language impairment group and the control group in all measured aspects of the TOLD-3 (p<0.001); the children with specific language impairment had a shorter mean length of utterance (p<0.001) and made less use of functional words in their speech (p=0.002) compared with their peers. Conclusion: As with specific language impairment in other languages, all language abilities of Persian-speaking children with specific language impairment fall below the expected stage for their age. Furthermore, the Persian version of the TOLD-3 is a useful assessment instrument for identifying children with specific language impairment, comparable to versions in other languages.
Pfau, R.; Steinbach, M.; Pfau, R.; Steinbach, M.; Herrmann, A.
Sign language grammars, just like spoken language grammars, generally provide various means to generate different kinds of complex syntactic structures including subordination of complement clauses, adverbial clauses, or relative clauses. Studies on various sign languages have revealed that sign
This book covers language modeling and automatic speech recognition for inflective languages (e.g. Slavic languages), which represent roughly half of the languages spoken in Europe. These languages do not perform as well as English in speech recognition systems and it is therefore harder to develop an application with sufficient quality for the end user. The authors describe the most important language features for the development of a speech recognition system. This is then presented through the analysis of errors in the system and the development of language models and their inclusion in speech recognition systems, which specifically address the errors that are relevant for targeted applications. The error analysis is done with regard to morphological characteristics of the word in the recognized sentences. The book is oriented towards speech recognition with large vocabularies and continuous and even spontaneous speech. Today such applications work with a rather small number of languages compared to the nu...
Gilkerson, Jill; Zhang, Yiwen; Xu, Dongxin; Richards, Jeffrey A.; Xu, Xiaojuan; Jiang, Fan; Harnsberger, James; Topping, Keith
Purpose: The purpose of this study was to evaluate performance of the Language Environment Analysis (LENA) automated language-analysis system for the Chinese Shanghai dialect and Mandarin (SDM) languages. Method: Volunteer parents of 22 children aged 3-23 months were recruited in Shanghai. Families provided daylong in-home audio recordings using…
Pyburn, Daniel T.; Pazicni, Samuel; Benassi, Victor A.; Tappin, Elizabeth E.
Few studies have focused specifically on the role that language plays in learning chemistry. We report here an investigation into the ability of language comprehension measures to predict performance in university introductory chemistry courses. This work is informed by theories of language comprehension, which posit that high-skilled…
This study used structural equation modeling to explore the possible causal relations between foreign language (English) listening anxiety and English listening performance. Three hundred participants learning English as a foreign language (FL) completed the foreign language listening anxiety scale (FLLAS) and IELTS test twice with an interval of…
Jansma, Marrit; Minnaert, Alexander; Klinkenberg, Edwin
In this study, it was investigated whether third language teaching through Content and Language Integrated Learning (CLIL) was more effective than teaching a third language as an isolated subject. By means of a cross-sectional study design, English vocabulary, speaking performance and
Sibieta, Luke; Kotecha, Mehul; Skipp, Amy
The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…
... on populations and the numbers of people speaking each language. Features include: nearly 600 languages identified as to where they are spoken and the family to which they belong; over 200 languages individually described, with sample passages and English translation; fascinating insights into the history and development of individual languages; a...
Kimmelman, V.; Pfau, R.; Féry, C.; Ishihara, S.
This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often
Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.
This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often
Language ordinarily serves as a tool for communication. Communication is made effective by the individual's competence and ability in using any language; otherwise, the reverse will be the case. Similarly, some other variables can enhance or mar communication. The variables which may aid or abate communication ...
Nelson, Lauri H.; Wright, Whitney; Parker, Elizabeth W.
Children who are Deaf and Hard of Hearing (DHH) using Listening and spoken language (LSL) as their primary mode of communication have emerged as a growing population in general education and special education classroom settings, and have educational performance expectations similar to their same aged hearing peers. Academic instruction that…
This study reports on the pattern of performance on spoken and written naming, spelling to dictation, and oral reading of single verbs and nouns in a bilingual speaker with aphasia in two first languages that differ in morphological complexity, orthographic transparency, and script: Greek (L1a) and English (L1b). The results reveal no verb/noun…
This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.
Sanden, Guro Refsum
Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation…
Sherwood, Bruce Arne
Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)
Termos, Mohamad Hani
The Classroom Performance System (CPS) is an instructional technology that increases student performance and promotes active learning. This study assessed the effect of the CPS on student participation, attendance, and achievement in multicultural college-level anatomy and physiology classes, where students' first spoken language is not English.…
Full Text Available As an experienced teacher of advanced learners of English I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from a spoken learner corpus LINDSEI_CZ and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, and especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, use of potentially confusing rules, and the lack of attempt to deal with broader notions such as continuity and perfectiveness. Whilst the study is useful for the teachers of advanced learners, its pedagogical implications stretch to lower levels of proficiency as well.
Laursen, Helle Pia
Moving conceptualizations of language and literacy in SLA. In this colloquium, we aim to problematize the concepts of language and literacy in the field that is termed "second language" research and seek ways to critically connect the terms, considering current day language use and conceptualizations of language and literacy in research on (second) language acquisition. When examining children's first language acquisition, spoken language has been the primary concern in scholarship: a child acquires oral language first and written language follows later, i.e. language precedes literacy. On the other hand, many second or foreign language learners learn mostly through written language or learn spoken and written language at the same time. Thus the connections between spoken and written (and visual) modalities, i.e. between language and literacy, are complex in research on language acquisition.
Baker, Eva L.; And Others
Evaluation models are being developed for assessing artificial intelligence (AI) systems in terms of similar performance by groups of people. Natural language understanding and vision systems are the areas of concentration. In simplest terms, the goal is to norm a given natural language system's performance on a sample of people. The specific…
de Marcken, Carl
This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the "content" of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.
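As a toy illustration of fitting unsegmented utterances with a stochastic generative model, the sketch below finds the most probable segmentation of an unsegmented string under a unigram word model. It is a minimal stand-in for the general idea, not de Marcken's actual algorithm, and the lexicon probabilities are invented for illustration.

```python
import math

def best_segmentation(text, lexicon):
    """Viterbi-style dynamic programme: find the segmentation of an
    unsegmented string that maximises log-probability under a unigram
    'generative model' of words."""
    n = len(text)
    # best[i] = (best log-probability of text[:i], segmentation achieving it)
    best = [(0.0, [])] + [(-math.inf, None)] * n
    for end in range(1, n + 1):
        for start in range(end):
            word = text[start:end]
            if word in lexicon and best[start][0] > -math.inf:
                score = best[start][0] + math.log(lexicon[word])
                if score > best[end][0]:
                    best[end] = (score, best[start][1] + [word])
    return best[n][1]

# Hypothetical word probabilities (assumed for illustration only).
lexicon = {"the": 0.4, "dog": 0.2, "cat": 0.2, "do": 0.1, "g": 0.1}
print(best_segmentation("thedogthecat", lexicon))
# → ['the', 'dog', 'the', 'cat']
```

A full acquisition system would additionally learn the lexicon itself, trading off model size against data fit; here the lexicon is fixed so only the segmentation step is shown.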
Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and final vibrant /r/
Socorro Cláudia Tavares de Sousa
Full Text Available The present study aims to investigate the influence of the spoken language on children's writing in relation to the phenomena of cancellation of the dental /d/ and the final vibrant /r/. We elaborated and applied a research instrument to children from primary schools in Fortaleza, and used the software SPSS to analyze the data. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of education are conditioning elements for the cancellation of the final vibrant /r/.
Everts, Regula; Harvey, A Simon; Lillywhite, Leasha; Wrennall, Jacquie; Abbott, David F; Gonzalez, Linda; Kean, Michael; Jackson, Graeme D; Anderson, Vicki
Assessment of language dominance with functional magnetic resonance imaging (fMRI) and neuropsychological evaluation is often used prior to epilepsy surgery. This study explores whether language lateralization and cognitive performance are systematically related in young patients with focal epilepsy. Language fMRI and neuropsychological data (language, visuospatial functions, and memory) of 40 patients (7-18 years of age) with unilateral, refractory focal epilepsy in temporal and/or frontal areas of the left (n = 23) or right hemisphere (n = 17) were analyzed. fMRI data of 18 healthy controls (7-18 years) served as a normative sample. A laterality index was computed to determine the lateralization of activation in three regions of interest (frontal, parietal, and temporal). Atypical language lateralization was demonstrated in 12 (30%) of 40 patients. A correlation between language lateralization and verbal memory performance occurred in patients with left-sided epilepsy over all three regions of interest, with bilateral or right-sided language lateralization being correlated with better verbal memory performance (Word Pairs Recall: frontal r = -0.4, p = 0.016; parietal r = -0.4, p = 0.043; temporal r = -0.4, p = 0.041). Verbal memory performance made the largest contribution to language lateralization, whereas handedness and side of seizures did not contribute to the variance in language lateralization. This finding reflects the association between neocortical language and hippocampal memory regions in patients with left-sided epilepsy. Atypical language lateralization is advantageous for verbal memory performance, presumably a result of transfer of verbal memory function. In children with focal epilepsy, verbal memory performance provides a better idea of language lateralization than handedness and side of epilepsy and lesion.
Book review. Neurolinguistics. An Introduction to Spoken Language Processing and its Disorders, John Ingram. Cambridge University Press, Cambridge (Cambridge Textbooks in Linguistics) (2007). xxi + 420 pp., ISBN 978-0-521-79640-8 (pb)
The present textbook is one of the few recent textbooks in the area of neurolinguistics and will be welcomed by teachers of neurolinguistic courses as well as researchers interested in the topic. Neurolinguistics is a huge area, and the boundaries between psycho- and neurolinguistics are not sharp. Often the term neurolinguistics is used to refer to research involving neuropsychological patients suffering from some sort of language disorder or impairment. Also, the term neuro- rather than psy...
Saunders, William Robert; Grant, James; Müller, Eike Hermann
Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
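The "Separation of Concerns" idea can be sketched in miniature: the science specialist writes only a pair kernel at a high abstraction level, while the framework owns the looping strategy and could swap in an OpenMP/MPI/GPU backend without touching the kernel. The function names and the inverse-square kernel below are hypothetical, not taken from the described system.

```python
import itertools

def pairwise_kernel(r_i, r_j):
    """'Science' code: an inverse-square pair interaction, written with no
    knowledge of how the pair loop is executed or parallelised."""
    dx = [a - b for a, b in zip(r_i, r_j)]
    r2 = sum(d * d for d in dx)
    return 1.0 / r2

def execute_pairwise(kernel, positions):
    """'Framework' code: the looping strategy lives here. This serial
    all-pairs loop could be replaced by a cell list or a GPU backend
    without any change to the kernel above."""
    total = 0.0
    for r_i, r_j in itertools.combinations(positions, 2):
        total += kernel(r_i, r_j)
    return total

positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
print(execute_pairwise(pairwise_kernel, positions))  # ≈ 1.45
```

In a code-generation system the kernel would be translated into efficient low-level code rather than called directly, but the division of responsibilities is the same.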
Damen, G.W.J.A.; Langereis, M.C.; Snik, A.F.M.; Chute, P.M.; Mylanus, E.A.M.
OBJECTIVE: Investigation of the relation between classroom performance and language development of cochlear implant (CI) students in mainstream education. Structural analyses of assessment of mainstream performance (AMP) and Screening Instrument For Targeting Educational Risk (SIFTER) instruments.
Nicola, K; Watter, P
This study investigated (1) the visual-motor integration (VMI) performance of children with severe specific language impairment (SLI), and any effect of age, gender, socio-economic status and concomitant speech impairment; and (2) the relationship between language and VMI performance. It is hypothesized that children with severe SLI would present with VMI problems irrespective of gender and socio-economic status; however, VMI deficits will be more pronounced in younger children and those with concomitant speech impairment. Furthermore, it is hypothesized that there will be a relationship between VMI and language performance, particularly in receptive scores. Children enrolled between 2000 and 2008 in a school dedicated to children with severe speech-language impairments were included, if they met the criteria for severe SLI with or without concomitant speech impairment which was verified by a government organization. Results from all initial standardized language and VMI assessments found during a retrospective review of chart files were included. The final study group included 100 children (males = 76), from 4 to 14 years of age with mean language scores at least 2SD below the mean. For VMI performance, 52% of the children scored below -1SD, with 25% of the total group scoring more than 1.5SD below the mean. Age, gender and the addition of a speech impairment did not impact on VMI performance; however, children living in disadvantaged suburbs scored significantly better than children residing in advantaged suburbs. Receptive language scores of the Clinical Evaluation of Language Fundamentals was the only score associated with and able to predict VMI performance. A small subgroup of children with severe SLI will also have poor VMI skills. The best predictor of poor VMI is receptive language scores on the Clinical Evaluation of Language Fundamentals. Children with poor receptive language performance may benefit from VMI assessment and multidisciplinary
The concept of consciousness, distinct from that of vigilance, can be defined as the immediate awareness of motor-perceptual activities, grounded in the cognitive assimilation of real duration. Linguistic theories distinguish in language, first, linguistic competence, the shared system of signs or linguistic knowledge of the group, which can be compared to a phenomenon of automatism; and, second, spoken language or linguistic performance, an individual, voluntary, and creative act. The observation of aphasics and of certain partial temporal epileptics makes it possible to dissociate these two forms of language. Only the creative word belongs to consciousness, linked to the immediate observation of the self by the self; the unconscious speaker, cut off from real time and lacking true creative ability, can only be the "echo-souvenir" of the conscious person.
Schuit, J.; Baker, A.; Pfau, R.
Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different
This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…
In this paper, the three most commonly used programming languages, C#, VB.NET, and Java, were compared both theoretically and empirically with respect to data security, data connectivity, and data transfer. The algorithms for each of these criteria were implemented on a mobile device, specifically, Android and Window ...
Fuijkschot, J.; Maassen, B.A.M.; Gorter, J.W.; Gerven, M.H.J.C van; Willemsen, M.A.A.P.
OBJECTIVE: To describe speech-language pathology in patients with Sjogren-Larsson syndrome (SLS) in relation to their cognitive and motor impairment. DESIGN: Observational case series. METHODS: Cognitive functioning was assessed in 16 patients with SLS (nine males; seven females) using different
The contribution of cooperative learning (CL) in promoting second and foreign language learning has been widely acknowledged. Little scholarly attention, however, has been given to revealing how this teaching method works and promotes learners' improved communicative competence. This qualitative case study explores the important role that individual accountability in CL plays in giving English as a Foreign Language (EFL) learners in Indonesia the opportunity to use the target language of English. While individual accountability is a principle of and one of the activities in CL, it is currently understudied, thus little is known about how it enhances EFL learning. This study aims to address this gap by conducting a constructivist grounded theory analysis on participant observation, in-depth interview, and document analysis data drawn from two secondary school EFL teachers, 77 students in the observed classrooms, and four focal students. The analysis shows that through individual accountability in CL, the EFL learners had opportunities to use the target language, which may have contributed to the attainment of communicative competence, the goal of the EFL instruction. More specifically, compared to the use of conventional group work in the observed classrooms, through the activities of individual accountability in CL, i.e., performances and peer interaction, the EFL learners had more opportunities to use spoken English. The present study recommends that teachers, especially those new to CL, follow the preset procedure of selected CL instructional strategies or structures in order to recognize the activities within individual accountability in CL and understand how these activities benefit students.
Cenoz, Jasone; Gorter, Durk
This paper focuses on the linguistic landscape of two streets in two multilingual cities in Friesland (Netherlands) and the Basque Country (Spain) where a minority language is spoken, Basque or Frisian. The paper analyses the use of the minority language (Basque or Frisian), the state language (Spanish or Dutch) and English as an international…
Qi, Cathy H.; Kaiser, Ann P.; Marley, Scott C.; Milan, Stephanie
The purposes of the study were to determine (a) the ability of two spontaneous language measures, mean length of utterance in morphemes (MLU-m) and number of different words (NDW), to identify African American preschool children at low and high levels of language ability; (b) whether child chronological age was related to the performance of either…
Mainela-Arnold, Elina; Misra, Maya; Miller, Carol; Poll, Gerard H.; Park, Ji Sook
Background: Children with poor language abilities tend to perform poorly on verbal working memory tasks. This result has been interpreted as evidence that limitations in working memory capacity may interfere with the development of a mature linguistic system. However, it is possible that language abilities, such as the efficiency of sentence…
Solano-Flores, Guillermo; Barnett-Clarke, Carne; Kachchaf, Rachel R.
We examined the performance of English language learners (ELLs) and non-ELLs on Grade 4 and Grade 5 mathematics content knowledge (CK) and academic language (AL) tests. CK and AL items had different semiotic loads (numbers of different types of semiotic features) and different semiotic structures (relative frequencies of different semiotic…
Akinwamide, Timothy Kolade
This study examined the influence of Process Approach on English as second language Students' performances in essay writing. The purpose was to determine how far this current global approach could be of assistance to the writing skill development of these bilingual speakers of English language. The study employed the pre-test post-test control…
Levine, Madlyn A.; Hanes, Michael L.
This study investigated the relationship between dialect usage and performance on four language tasks designed to reflect features developmental in nature: articulation, grammatical closure, auditory discrimination, and sentence comprehension. Predictor and criterion language tasks were administered to 90 kindergarten, first-, and second-grade…
This study examined the relationships among group size, participation, and learning performance factors when learning a programming language in a computer-supported collaborative learning (CSCL) context. An online forum was used as the CSCL environment for learning the Microsoft ASP.NET programming language. The collaborative-learning experiment…
Nip, Ignatius S. B.; Blumenfeld, Henrike K.
Purpose: Second-language (L2) production requires greater cognitive resources to inhibit the native language and to retrieve less robust lexical representations. The current investigation identifies how proficiency and linguistic complexity, specifically syntactic and lexical factors, influence speech motor control and performance. Method: Speech…
Yao, Y.; van Ours, J.C.
Many immigrants in the Netherlands have poor Dutch language skills. They face problems in speaking and reading Dutch. Our paper investigates how these prob- lems affect their labor market performance in terms of employment, hours of work and wages. We find that for female immigrants language
Jongbloed-Faber, L.; Van de Velde, H.; van der Meer, C.; Klinkenberg, E.L.
This paper explores the use of Frisian, a minority language spoken in the Dutch province of Fryslân, on social media by Frisian teenagers. Frisian is the mother tongue of 54% of the 650,000 inhabitants and is predominantly a spoken language: 64% of the Frisian population can speak it well, while
English Second Language, General, Special Education, and Speech/Language Personal Teacher Efficacy, English Language Arts Scientifically-Validated Intervention Practice, and Working Memory Development of English Language Learners in High and Low Performing Elementary Schools
Brown, Barbara J.
The researcher investigated teacher factors contributing to English language arts (ELA) achievement of English language learners (ELLs) over 2 consecutive years, in high and low performing elementary schools with a Hispanic/Latino student population greater than or equal to 30 percent. These factors included personal teacher efficacy, teacher…
Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.
Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…
Bedore, Lisa M; Peña, Elizabeth D; Mendez-Perez, Anita; Gillam, Ronald B
Purpose This study assesses the factors that contribute to Spanish and English language development in bilingual children. Method 757 Hispanic Pre-kindergarten and kindergarten age children completed screening tests of semantic and morphosyntactic development in Spanish and English. Parents provided information about their occupation and education as well as their children’s English and Spanish exposure. Data were analyzed using zero-inflated regression models (comprising a logistic regression component and a negative binomial or Poisson component) to explore factors that contributed to children initiating L1 and L2 performance and factors that contributed to building children’s knowledge. Results Factors that were positively associated with initiating L1 and L2 performance were language input/output, free and reduced lunch, and age. Factors associated with building knowledge included age, parent education, input/output, free and reduced lunch and school district. Conclusion Amount of language input is important as children begin to use a language, and amount of language output is important for adding knowledge to their language. Semantic development seemed to be driven more by input while morphosyntax development relied on both input and output. Clinicians who assess bilingual children should examine children’s language output in their second language to better understand their levels of performance. PMID:21731899
Iyiola Amos Damilare
Substitution is a phonological process in language. Existing studies have examined deletion in several languages and dialects, with less attention paid to the spoken French of Ijebu undergraduates. This article therefore examined substitution as a dominant phenomenon in the spoken French of thirty-four Ijebu Undergraduate French Learners (IUFLs) in selected universities in South West Nigeria, with a view to establishing the dominance of substitution in the spoken French of IUFLs. Data were collected by tape-recording participants' production of 30 sentences containing both French vowel and consonant sounds. The results revealed inappropriate replacement of vowel and consonant sounds in the medial and final positions in the spoken French of IUFLs.
Murphy, Richard C.
This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.
Mst. Moriam, Quadir
This study discusses motivation and strategy use of university students to learn spoken English in Bangladesh. A group of 355 (187 males and 168 females) university students participated in this investigation. To measure learners' degree of motivation a modified version of questionnaire used by Schmidt et al. (1996) was administered. Participants reported their strategy use on a modified version of SILL, the Strategy Inventory for Language Learning, version 7.0 (Oxford, 1990). In order to fin...
Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina
The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child.
Gasquoine, Philip Gerard; Croyle, Kristin L; Cavazos-Gonzalez, Cynthia; Sandoval, Omar
This study compared the performance of Hispanic American bilingual adults on Spanish and English language versions of a neuropsychological test battery. Language achievement test scores were used to divide 36 bilingual, neurologically intact, Hispanic Americans from south Texas into Spanish-dominant, balanced, and English-dominant bilingual groups. They were administered the eight subtests of the Bateria Neuropsicologica and the Matrix Reasoning subtest of the WAIS-III in Spanish and English. Half the participants were tested in Spanish first. Balanced bilinguals showed no significant differences in test scores between Spanish and English language administrations. Spanish and/or English dominant bilinguals showed significant effects of language of administration on tests with higher language compared to visual perceptual weighting (Woodcock-Munoz Language Survey-Revised, Letter Fluency, Story Memory, and Stroop Color and Word Test). Scores on tests with higher visual-perceptual weighting (Matrix Reasoning, Figure Memory, Wisconsin Card Sorting Test, and Spatial Span), were not significantly affected by language of administration, nor were scores on the Spanish/California Verbal Learning Test, and Digit Span. A problem was encountered in comparing false positive rates in each language, as Spanish norms fell below English norms, resulting in a much higher false positive rate in English across all bilingual groupings. Use of a comparison standard (picture vocabulary score) reduced false positive rates in both languages, but the higher false positive rate in English persisted.
van Loon, E.; Pfau, R.; Steinbach, M.; Müller, C.; Cienki, A.; Fricke, E.; Ladewig, S.H.; McNeill, D.; Bressem, J.
Recent studies on grammaticalization in sign languages have shown that, for the most part, the grammaticalization paths identified in sign languages parallel those previously described for spoken languages. Hence, the general principles of grammaticalization do not depend on the modality of language
Zhang, Qingfang; Wang, Cheng
The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domain such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is syllable in Chinese but segments in Dutch, French or English. The present study investigated the effects of WF and SF, and their interaction in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least, in Chinese written production. However, the SF effect over repetitions was divergent in both modalities: it was significant in the former two repetitions in spoken whereas it was significant in the second repetition only in written. Due to the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in spoken and written output modalities. The implications of these results on written production models are discussed.
This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses. · Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for t...
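The gap between documents and data that the book describes can be illustrated with a toy rule-based extractor (my own illustration, not an example from the book): a single regular expression turns mentions of laws in free text into structured records.

```python
import re

# Hypothetical pattern for citations of the form "Law No. <number> of <year>".
PATTERN = re.compile(r"Law\s+No\.\s*(\d+)\s+of\s+(\d{4})")

def extract_laws(text):
    # Each regex match becomes a structured record: unstructured natural
    # language in, machine-readable data out.
    return [{"number": int(n), "year": int(y)} for n, y in PATTERN.findall(text)]

doc = "The decree cites Law No. 12 of 1998 and amends Law No. 7 of 2004."
records = extract_laws(doc)
```

Real IE systems of the kind the book covers replace the hand-written pattern with learned extractors, but the input/output contract is the same: text in, structured records out.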
Rodriguez, Mabel; Kratochvilova, Zuzana; Kuniss, Renata; Vorackova, Veronika; Dorazilova, Aneta; Fajnerova, Iveta
Bilingualism (BL) is increasing around the world. Although BL has been shown to have a broad impact-both positive and negative-on language and cognitive functioning, cognitive models and standards are mainly based on monolinguals. If we take cognitive performance of monolinguals as a standard, then the performance of bilinguals might not be accurately estimated. The assessment of cognitive functions is an important part of both the diagnostic process and further treatment in neurological and neuropsychiatric patients. In order to identify the presence or absence of cognitive deficit in bilingual patients, it will be important to determine the positive and/or negative impact of BL properties on measured cognitive performance. However, research of the impact of BL on cognitive performance in neuropsychiatric patients is limited. This article aims to compare the influence of the language (dominant-L1, second-L2) used for assessment of verbal cognitive performance in two cases of bilingual neuropsychiatric patients (English/Czech). Despite the fact that the two cases have different diagnoses, similarities in working memory and verbal learning profiles for L1 and L2 were present in both patients. We expected L1 to have higher performance in all measures when compared with L2. This assumption was partially confirmed. As expected, verbal working memory performance was better when assessed in L1. In contrast, verbal learning showed the same or better performance in L2 when compared with L1. Verbal fluency and immediate recall results were comparable in both languages. In conclusion, the language of administration partially influenced verbal performance of bilingual patients. Whether the language itself influenced low performance in a given language or it was a result of a deficit requires further research. According to our results, we suggest that an assessment in both languages needs to be a component of reasonable cognitive assessment of bilingual patients. © 2015 The
Davin, Kristin; Troyan, Francis J.; Donato, Richard; Hellman, Ashley
This article reports on the implementation of the Integrated Performance Assessment (IPA) in an Early Foreign Language Learning program. The goal of this research was to examine the performance of grade 4 and 5 students of Spanish on the IPA. Performance across the three communicative tasks is described and modifications to IPA procedures based on…
In this retrospective analysis of 140 third-year Psychology students, their academic performance was analysed in relation to their performance in the previous two years and, in particular, on a tutorial-based foundation programme in the first semester of their first-year. The results indicate that performance in third-year is not ...
Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones : Putting Spoken Dialog Systems into Practice
These proceedings presents the state-of-the-art in spoken dialog systems with applications in robotics, knowledge access and communication. It addresses specifically: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and, 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.
Kyle Tran Myhre
In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and the current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.
Technology Assisted Language Learning (TALL) is an infallible means to develop profound knowledge and a wide range of language skills. It instills in EFL learners an illimitable passion for task-based and skills-oriented learning rather than rote memorization. New technological gadgets have commoditized broad-based learning and teaching avenues and brought the whole learning process to life. A vast variety of authentic online learning resources, motivational visual prompts, exciting videos, web-based interactivity and customizable language software, email, discussion forums, Skype, Twitter, apps, Internet mobiles, Facebook and YouTube have become prominent tools to enhance competence and performance in EFL teaching and learning realms. Technology can also provide various types of scaffolding for students learning to read. Instructors can likewise enhance their pedagogical effectiveness. However, the main focus of interest in this study is to ascertain to what extent modern technological devices augment learners' competence and performance, specifically in vocabulary learning, grammatical accuracy and listening/speaking skills. The scores of the empirical surveys conducted in the present study reveal that TALL does assist learners to improve listening/speaking skills, pronunciation, extensive vocabulary and grammatical accuracy. The findings also show that the hybridity, instantaneity and super-diversity of digital learning have a far-reaching impact on learners' motivation and lead learners to immerse themselves in the whole learning process.
Hsiao, Tsung-Yuan; Chiang, Steve
This preliminary study examined the factor structure of the Beliefs About Language Learning Inventory in two samples of about 750 college students of English as a foreign language in Taiwan. Results of confirmatory factor analysis lend partial support to Horwitz's theoretical five-factor belief model. Subsequent exploratory and confirmatory factor analyses of data show that a four-factor model represented by only 12 items performed better than other models both theoretically and empirically. This model consists of two dimensions already theorized in the inventory: Difficulty of Language Learning and Foreign Language Aptitude, and two newly interpreted dimensions, Importance of Spoken Language and Analytical Approaches to Language Learning. Although this four-factor model could be replicated in an independent sample, the factors are not reliable, suggesting the need to search for a more representative set of beliefs to tap specific aspects of language learning.
Damen, Godelieve W J A; Langereis, Margreet C; Snik, Ad F M; Chute, Patricia M; Mylanus, Emmanuel A M
Investigation of the relation between classroom performance and language development of cochlear implant (CI) students in mainstream education. Structural analyses of assessment of mainstream performance (AMP) and Screening Instrument For Targeting Educational Risk (SIFTER) instruments. Cross-sectional instrument and language development analyses. Tertiary university medical center. Twenty-six CI children in elementary school with congenital or prelingual deafness were included. At the time of this study, mean period of multichannel CI use was 5.3 years, and children's ages ranged from 6.5 to 12.8 years. Assessment of mainstream performance and SIFTER instruments measured classroom performance and language development were measured by means of Reynell and Schlichting tests. Assessment of mainstream performance and SIFTER domains showed good reliability (Cronbach alpha >0.6), but factor analyses only showed the expected instrument structure in the AMP. In both questionnaires and within all domains, individual variability is detected. Spearman's correlation analyses showed the probable explanation of individual questionnaire variability by language test results (p value mostly mainstream education varies. Correlation analyses showed strong significant relation between questionnaire results (classroom performance) and both expressive and receptive language test results (Schlichting and Reynell tests). Structural questionnaire analyses of the AMP and SIFTER demonstrated good reliability. The predictive value of the AMP can monitor the actual linguistic functioning of the child.
Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth by Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n=27), deaf native users of British Sign Language (BSL; n=7), and deaf non-native signers (n=19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and whether language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality, spoken or signed, rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM
This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…
Alimi, Modupe M.
Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…
Ahmed H. Yousef
This paper focuses on verifying the readiness, feasibility, generality and usefulness of multi-stage programming in software applications. We present a benchmark designed to evaluate the performance gain of different multi-stage programming (MSP) implementations of object-oriented languages. The benchmarks in this suite cover tests that range from classic simple examples (like matrix algebra) to advanced examples (like encryption and image processing). The benchmark is applied to compare the performance gain of two different MSP implementations (Mint and Metaphor) that are built on object-oriented languages (Java and C#, respectively). The results of applying this benchmark to these languages are presented and analysed. The measurement technique used in benchmarking leads to the development of a language-independent performance enhancement framework that allows the programmer to select which code segments need staging. The framework also enables the programmer to verify the effectiveness of staging on the application's performance. The framework is applied to a real case study, whose results showed the effectiveness of the framework in achieving significant performance enhancement.
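The staging idea that this benchmark measures can be sketched outside of Mint or Metaphor as well. The following Python fragment is only an illustrative sketch of multi-stage programming, not the paper's benchmark code: a generator runs at an earlier stage, specializes the classic power function for a fixed exponent by unrolling the loop into a single expression, and compiles the residual program once, so later calls pay no interpretive overhead for the exponent.

```python
def staged_power(n: int):
    """Stage one: generate and compile a power function specialized
    for a fixed exponent n, with the multiplication loop unrolled."""
    expr = "*".join(["x"] * n) if n > 0 else "1"
    # Stage two is the compiled residual program: a plain lambda.
    return eval(compile(f"lambda x: {expr}", "<staged>", "eval"))

power5 = staged_power(5)  # specialization happens once, here
print(power5(2))          # the residual code just computes 2*2*2*2*2, i.e. 32
```

An MSP benchmark of the kind the abstract describes would time the staged variant against its unstaged counterpart across workloads such as matrix algebra or image kernels, where the specialization cost is amortized over many calls.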
Williams, Colin H.
The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…
Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka
Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.
Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process. This book examines how user models can be used to support such early evaluations in two ways: by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed. How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...
The mastery of speaking skills in English has become a major requisite in the engineering industry. Engineers are expected to possess speaking skills for executing their routine activities and advancing their career prospects. The article focuses on an experimental study conducted to improve the English spoken proficiency of Indian engineering students using a task-based approach. Tasks are activities that centre on the learners, providing the main context and focus for learning; a task therefore facilitates the learners to use language rather than to learn it. This article further explores the pivotal role played by the pedagogical intervention in enabling the learners to improve their speaking skill in L2. The participants chosen for the control and experimental groups were first-year civil engineering students, comprising 38 in each group. The vital tool used in the study is a set of oral communicative tasks administered to the experimental group, which enabled the students to think and generate sentences on their own orally. A t-test was computed to compare the performance of the students in the control and experimental groups. The results of the statistical analysis revealed that there was a significant improvement in the oral proficiency of the experimental group.
Weber, B.; Wellmer, J.; Schur, S.; Dinkelacker, V.; Ruhlmann, J.; Mormann, F.; Axmacher, N.; Elger, C.E.; Fernandez, G.S.E.
PURPOSE: To determine whether language functional magnetic resonance imaging (fMRI) before epilepsy surgery can be similarly interpreted in patients with greatly different performance levels. METHODS: An fMRI paradigm using a semantic decision task with performance control and a perceptual control task was applied to 226 consecutive patients with drug-resistant localization-related epilepsy during their presurgical evaluations.
Wagner, Robin M.; Huang, Jiunn C.
This paper explores the relative performances of Native English Speaking ("NES") students and English Second Language ("ESL") students in accounting courses at a large urban state university. Based upon a longitudinal study, we conclude that the relative performance between NES and ESL students depends upon the particular…
van den Berg, Freek; Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Turau, Volker; Kwiatkowska, Marta; Mangharam, Rahul; Weyer, Christoph
We propose iDSL, a domain-specific language and toolbox for the performance evaluation of Medical Imaging Systems. iDSL provides transformations to MoDeST models, which are in turn converted into UPPAAL and discrete-event MODES models. This enables automated performance evaluation by means of model checking.
This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.
Allen, Melissa M; Ukrainetz, Teresa A; Carswell, Alisa L
This study investigated the narrative language performance of 3 types of readers who had been identified as being at risk through code-based response-to-intervention (RTI) procedures. In a retrospective group comparison, 32 at-risk 1st-grade readers were identified: children who resolved without intervention (early resolvers, n = 11), children who met criterion following 4 weeks of intervention (good responders, n = 8), and children who failed to meet criterion following 4 weeks of intervention (poor responders, n = 13). A narrative retell and a norm-referenced language test were obtained before intervention. There were no significant differences between the 3 learner types on the language test. However, the narratives of the good responders were significantly higher than the narratives of the other 2 groups on total number of words, number of different words, and number of communication units. The narratives of early resolvers and good responders differed significantly on the productivity index, number of coordinating conjunctions, and number of episodic elements. There were no other significant differences. Types of learners distinguished by a code-based RTI model showed differences in their narrative language. First graders who responded well to code-based reading intervention retold stories that contained more language and better story grammar than first graders who did not respond well to intervention. These results indicate the need to evaluate narrative language performance within RTI, especially for early resolvers.
Stoop, Ruedi; Nüesch, Patrick; Stoop, Ralph Lukas; Bunimovich, Leonid A
Using symbolic dynamics and a surrogate data approach, we show that the language exhibited by common fruit flies Drosophila ('D.') during courtship is as grammatically complex as the most complex human-spoken modern languages. This finding emerges from the study of fifty high-speed courtship videos (generally of several minutes' duration) that were visually dissected frame by frame into 37 fundamental behavioral elements. From the symbolic dynamics of these elements, the courtship-generating language was determined with extreme confidence (significance level > 0.95). The language's categorization in terms of its position in Chomsky's hierarchical language classification allows Drosophila's body language to be compared not only with computer compiler languages, but also with human-spoken languages. Drosophila's body language emerges to be at least as powerful as the languages spoken by humans.
A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…
Fragmentary proceedings record: Francis Kubala et al., "BBN BYBLOS and HARC February 1992 ATIS Benchmark Results," presented at the 5th DARPA Speech and Natural Language Workshop, Arden House, February 23-26, 1992; related work by Richard Schwartz, Steve Austin, Francis Kubala, John Makhoul, Long Nguyen, Paul Placeway, and George Zavaliagkos was presented at ICASSP, 1992. Francis Kubala chaired the DARPA Common Lexicon Working Group at the same workshop.
Fragmentary abstract: spontaneous speech contains stutters, false starts, repairs, hesitations, filled pauses, and various other non-lexical acoustic events, which makes robust handling difficult. The excerpt argues that separating out the various task-independent aspects of the conversation is a sensible choice from a software engineering perspective, both within and across systems, and refers to the RavenClaw error handling architecture.
Gloria Avendaño de Barón
This article presents the results of a research project whose aims were the following: to determine the frequency of use of the polite pronoun forms of address sumercé, usted and tú, according to differences in gender, age and level of education, among speakers in Tunja; to describe the sociodiscursive variations; and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and in Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.
André, Elisabeth; Rehm, Matthias; Minker, Wolfgang
While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user's perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.
Mann, Stephen L
Building on Barrett (1998), this study provides a sociolinguistic analysis of the language used by Suzanne, a European-American drag queen, during her on-stage performance in the southeastern United States. Suzanne uses wigs and costumes to portray a female character on stage, but never hides the fact that she is biologically male. She is also a member of a predominantly African-American cast. Through her creative use of linguistic features such as style-mixing (i.e., the use of linguistic features shared across multiple language varieties) and expletives, Suzanne is able to perform an identity that frequently blurs gender and racial lines.
The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations…
Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo
This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text…
Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E
Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in a similar manner as hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age that they are exposed to a perceptually accessible 1st language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.
Lauring, Jakob; Paunova, Minna; Butler, Christina Lea
Multicultural teams are increasingly employed by organizations as a way to achieve international coordination, particularly when creativity and innovation are desired. Unfortunately, studies often fail to demonstrate the purported benefits associated with these teams, reporting difficulties with communication and social integration, inhibiting creativity and performance. A survey-based study of multicultural academic teams (n = 1085) demonstrates that teams that are open to language diversity are more creative and perform better. We observe that performance is enhanced even further when teams are also…
Ramos, Teresita V.; de Guzman, Videa
This language textbook is designed for beginning students of Tagalog, the principal language spoken on the island of Luzon in the Philippines. The introduction discusses the history of Tagalog and certain features of the language. An explanation of the text is given, along with notes for the teacher. The text itself is divided into nine sections:…
Fragmentary record: "Approaches for Language Identification in Mismatched Environments" by Shahan Nercessian, Pedro Torres-Carrasquillo, and Gabriel Martínez-Montes. Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. The excerpt introduces spoken language identification (LID) and considers the task in the context of mismatched conditions, specifically addressing the use of unlabeled data.
There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
The goal of this project in Estonia was to determine which languages are spoken at home by students from the 2nd to the 5th year of basic school in Tallinn, the capital of Estonia. The same problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database does not state which languages are spoken at home. The material presented in this article belongs to the research topic "Home Language of Basic School Students in Tallinn" from the years 2007-2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called "Multilingual Project". It was determined which language dominates in everyday use, what the factors are for choosing the language of communication, and what the preferred languages and language skills are. This study reflects the actual trends of the language situation in these cities.
Weber, Bernd; Wellmer, Jörg; Schür, Simone; Dinkelacker, Vera; Ruhlmann, Jürgen; Mormann, Florian; Axmacher, Nikolai; Elger, Christian E; Fernández, Guillén
To determine whether language functional magnetic resonance imaging (fMRI) before epilepsy surgery can be similarly interpreted in patients with greatly different performance levels. An fMRI paradigm using a semantic decision task with performance control and a perceptual control task was applied to 226 consecutive patients with drug-resistant localization-related epilepsy during their presurgical evaluations. The volume of activation and lateralization in an inferior frontal and a temporoparietal area was assessed in correlation with individual performance levels. We observed differential effects of task performance on the volume of activation in the inferior frontal and the temporoparietal region of interest, but performance measures did not correlate with the lateralization of activation. fMRI, as applied here, in patients with a wide range of cognitive abilities, can be interpreted regarding language lateralization in a similar way.
Vinck, Anja; Verhagen, Mijke M M; Gerven, Marjo van; de Groot, Imelda J M; Weemaes, Corry M R; Maassen, Ben A M; Willemsen, Michel A A P
To describe cognitive and speech-language functioning of patients with ataxia-telangiectasia (A-T) in relation to their deteriorating (oculo)motor function. Observational case series. Cognitive functioning, language, speech and oral-motor functioning were examined in eight individuals with A-T (six boys, two girls), taking into account the confounding effects of motor functioning on test performance. All patients, except the youngest one, suffered from mild-to-moderate/severe intellectual impairment. Compared to developmental age, patients showed cognitive deficits in attention, (non)verbal memory and verbal fluency. Furthermore, dysarthria and weak oral-motor performance were found. Language was one of the patients' assets. In contrast to the severe deterioration of motor functioning in A-T, cognitive and language functioning appeared to level off, with a typical profile of neuropsychological strengths and weaknesses. Based on our experiences with A-T, suggestions are made for a valid assessment of the cognitive and speech-language manifestations.
Lobel, Jason William; Paputungan, Ade Tatak
This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…
This paper looks at the degree and way in which lesser-used languages are used as expressions of identity, focusing specifically on two of Europe's lesser-used languages. The first is Irish, spoken in the Republic of Ireland and the second is Galician, spoken in the Autonomous Community of Galicia in the North-western part of Spain. The paper…
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.
Nip, Ignatius S B; Blumenfeld, Henrike K
Second-language (L2) production requires greater cognitive resources to inhibit the native language and to retrieve less robust lexical representations. The current investigation identifies how proficiency and linguistic complexity, specifically syntactic and lexical factors, influence speech motor control and performance. Speech movements of 29 native English speakers with low or high proficiency in Spanish were recorded while producing simple and syntactically complex sentences in English and Spanish. Sentences were loaded with cognate (e.g., baby-bebé) or noncognate (e.g., dog-perro) words. Effects of proficiency, lexicality (cognate vs. noncognate), and syntactic complexity on maximum speed, range of movement, duration, and speech movement variability were examined. In general, speakers with lower L2 proficiency differed in their speech motor control and performance from speakers with higher L2 proficiency. Speakers with higher L2 proficiency generally had less speech movement variability, shorter phrase durations, greater maximum speeds, and greater ranges of movement. In addition, lexicality and syntactic complexity affected speech motor control and performance. L2 proficiency, lexicality, and syntactic complexity influence speech motor control and performance in adult L2 learners. Information about relationships between speech motor control, language proficiency, and cognitive-linguistic demands may be used to assess and treat bilingual clients and language learners.
Albus, Debra; Thurlow, Martha; Liu, Kristin; Bielinski, John
The authors examined the effects of a simplified English dictionary accommodation on the reading-test performance of Hmong English-language learners (ELLs). Participants included a control group of 69 non-ELL students and an experimental group of 133 Hmong ELLs from 3 urban middle schools in Minnesota. In a randomized counterbalanced design, all…
Musa, Alice K. J.; Nwachukwu, Kelechukwu I.; Ali, Domiya Geoffrey
The study determined Relationship between Students' Expectancy Beliefs and English Language Performance of Students in Maiduguri Metropolis, Borno State, Nigeria. Correlation design was adopted for the study. Four hypotheses which determined the relationships between the components of expectancy beliefs: ability, tasks difficulty, and past…
This study examined the relations between 8-12-year-olds' perceived attachment security to father, academic self-concept and school performance in language mastery. One hundred and twenty two French students' perceptions of attachment to mother and to father were explored with the Security Scale and their academic self-concept was assessed with…
Johnson, Benny G.; Sargent, Carol Springer
This study investigated how three factors impacted performance on cost-volume-profit homework problems: language, formula use, and instruction. Students enrolled in Introduction to Financial Accounting (the first principles of accounting course) and Managerial Accounting (the second principles of accounting course) from eight different US colleges…
Kormos, Judit; Préfontaine, Yvonne
The present mixed-methods study examined the role of learner appraisals of speech tasks in second language (L2) French fluency. Forty adult learners in a Canadian immersion program participated in the study that compared four sources of data: (1) objectively measured utterance fluency in participants' performances of three narrative tasks…
Dekker, Diane; Young, Catherine
There are more than 6000 languages spoken by the 6 billion people in the world today--however, those languages are not evenly divided among the world's population--over 90% of people globally speak only about 300 majority languages--the remaining 5700 languages being termed "minority languages". These languages represent the…
Roy-Campbell, Zaline M.
English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…
Analysis of English Language Learner Performance on the Biology Massachusetts Comprehensive Assessment System: The Impact of English Proficiency, First Language Characteristics, and Late-Entry ELL Status
Mitchell, Mary A.
This study analyzed English language learner (ELL) performance on the June 2012 Biology MCAS, namely on item attributes of domain, cognitive skill, and linguistic complexity. It examined the impact of English proficiency, Latinate first language, first language orthography, and late-entry ELL status. The results indicated that English proficiency was a strong predictor of performance and that ELLs at higher levels of English proficiency overwhelmingly passed. The results further indicated that English proficiency introduced a construct-irrelevant variance on the Biology MCAS and raised validity issues for using this assessment at lower levels of English proficiency. This study also found that ELLs with a Latinate first language consistently had statistically significant lower performance. Late-entry ELL status did not predict Biology MCAS performance.
This work contains the first comprehensive description of Abui, a language of the Trans New Guinea family spoken by approximately 16,000 speakers in the central part of Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village…
Phelan, P.F.; Keddy, C.; Beugelsdojk, T.J.
Several robotic systems have been developed by Los Alamos National Laboratory to handle radioactive material. Because of safety considerations, the robotic system must be under direct human supervision and interactive control continuously. In this paper, we describe the implementation of a voice-recognition system that permits this control, yet allows the robot to perform complex preprogrammed manipulations without the operator's intervention. To provide better interactive control, we connected a speech synthesis unit to the robot's control computer, providing audible feedback to the operator. Thus, upon completion of a task or if an emergency arises, an appropriate spoken message can be reported by the control computer. The training, programming, and operation of this commercially available system are discussed, as are the practical problems encountered during operations.
This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention…
Carter, Ronald; McCarthy, Michael
This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…
Anatoliy V. Kharkhurin
This is the first empirical investigation of language-mediated concept activation (LMCA) in bilingual memory as a cognitive mechanism facilitating divergent thinking. Russian–English bilingual and Russian monolingual college students were tested on a battery of tests including, among others, the Abbreviated Torrance Tests for Adults assessing divergent thinking traits and a translingual priming (TLP) test assessing the LMCA. The latter was designed as a lexical decision priming test, in which a prime and a target were not related in Russian (the language of testing), but were related through their translation equivalents in English (spoken only by the bilinguals). Bilinguals outperformed their monolingual counterparts on the divergent thinking trait of cognitive flexibility, and bilinguals' performance on this trait could be explained by their TLP effect. Age of second language acquisition and proficiency in this language were found to relate to the TLP effect, and therefore were proposed to influence the directionality and strength of connections in bilingual memory.
Many principals or heads of English departments use supervising checklists to monitor or evaluate their teachers' performance. In practice, however, teachers may not feel satisfied with the feedback they receive from their superiors. This paper aims at inspiring them with ideas for self-learning to improve their own teaching performance for professional development. In this paper, the writer shares his own experience as a principal and head of an English department by exploring self-evaluation models for monitoring language teachers' performance in the classroom. For this purpose, it is necessary to identify the needs of language teachers; later, this teacher portfolio may also help principals or heads of department evaluate their teachers' performance.
Science as a second language: Analysis of Emergent Bilinguals performance and the impact of English language proficiency and first language characteristics on the Colorado measures of academic success for science
Bruno, Joanna K.
In an age when communication is highly important and states across the nation, including Colorado, have adopted Common Core State Standards, the need for academic language is even more important than ever. The language of science has been compared to a second language in that it uses specific discourse patterns, semantic rules, and a very specific vocabulary. There is a need for educators to better understand how language impacts academic achievement, specifically concerning Emergent Bilinguals (EBs). Research has identified the need to study the role language plays in content assessments and the impact they have on EBs' performance (Abedi, 2008b; Abedi, Hofestter & Lord, 2004; Abedi & Lord, 2001). Since language is the means through which content knowledge is assessed, it is important to analyze this aspect of learning. A review of literature identified the need to create more reliable and valid content assessments for EBs (Abedi, 2008b) and to further study the impact of English proficiency on EBs' performance on standardized assessments (Solorzano, 2008; Wolf & Leon, 2009). This study contributes to the literature by analyzing EBs' performance on a state-level science content assessment, taking into consideration English language proficiency, receptive versus productive elements of language, and students' home language. This study further contributes by discussing the relationship between language proficiency and the different strands of science (physical, life, and earth) on the state science assessment. Finally, this study demonstrates that home language, English language proficiency, and receptive and productive elements of language are predictive of EBs' achievement on the CMAS for science, overall and by strand. It is the blending of the social (listening and speaking) with the academic (reading and writing) that is also important and possibly more important.
Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy
Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.
de Groot, A.M.B.; Filipović, L.; Pütz, M.
The linguistic expressions of the majority of bilinguals exhibit deviations from the corresponding expressions of monolinguals in phonology, grammar, and semantics, and in both languages. In addition, bilinguals may process spoken and written language differently from monolinguals. Two possible
Pfau, R.; Steinbach, M.
Studies on sign language grammaticalization have demonstrated that most of the attested diachronic changes from lexical to functional element parallel those previously described for spoken languages. To date, most of these studies are either descriptive in nature or embedded within
The primary condition for success in second or foreign language learning is providing an adequate environment, as a medium for increasing students' language exposure so that they can succeed in acquiring second or foreign language proficiency. This study was designed to propose adequate English language input that can decrease students' anxiety in reading comprehension performance. Of the four skills, reading can be regarded as especially important because reading is assumed to be the central means for learning new information. Some students, however, still encounter many problems in reading because of their anxiety when they are reading. Providing and creating interesting, contextual reading materials and gratified teachers can overcome this problem, which occurs mostly in Indonesian classrooms. The study revealed that younger learners of English there do not receive an adequate amount of target language input in their learning of English. Hence, it suggested the adoption of extensive reading programs as the most effective means of creating an input-rich environment in EFL learning contexts. It also suggests that book writers and publishers provide myriad books that are appropriate and readable for their students.
Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel
We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion in a spoken conversational agent, especially in mitigating users' frustrations and, ultimately, improving their satisfaction.
Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina
This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…
Pfau, R.; Steinbach, M.; Herrmann, A.
Since natural languages exist in two different modalities - the visual-gestural modality of sign languages and the auditory-oral modality of spoken languages - it is obvious that all fields of research in modern linguistics will benefit from research on sign languages. Although previous studies have
Poetry in a sign language can make use of literary devices just as poetry in a spoken language can. The study of literary expression in sign languages has increased over the last twenty years and for South African Sign Language (SASL) such literary texts have also become more available. This article gives a brief overview ...
Language policy and speech practice in Cape Town: An exploratory public health sector study. Michellene Williams, Simon Bekker. Abstract. Public language policy in South Africa recognises 11 official spoken languages. In Cape Town, and in the Western Cape, three of these eleven languages have been selected for ...
Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale
Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo. In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
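Distributional statistics of the kind listed in the abstract above can be computed from any syllable-transcribed corpus. Below is a minimal sketch of syllable-bigram transition probabilities and pointwise mutual information; the mini-corpus of syllabified words is invented for illustration and is not SyllabO+ data.

```python
# Toy sketch: transition probability and pointwise mutual information (PMI)
# for syllable bigrams, computed from an invented mini-corpus of
# syllabified words. This is NOT the SyllabO+ data or pipeline.
import math
from collections import Counter

corpus = [
    ["bon", "zhur"],  # invented syllabifications
    ["mer", "si"],
    ["bon", "swar"],
    ["mer", "si"],
]

unigrams = Counter(s for word in corpus for s in word)
bigrams = Counter((w[i], w[i + 1]) for w in corpus for i in range(len(w) - 1))
n_uni = sum(unigrams.values())
n_bi = sum(bigrams.values())

def transition_prob(s1, s2):
    """P(s2 | s1): how often syllable s1 is followed by s2."""
    following = sum(c for (a, _), c in bigrams.items() if a == s1)
    return bigrams[(s1, s2)] / following if following else 0.0

def pmi(s1, s2):
    """Pointwise mutual information of the bigram (s1, s2)."""
    p_joint = bigrams[(s1, s2)] / n_bi
    p1 = unigrams[s1] / n_uni
    p2 = unigrams[s2] / n_uni
    return math.log2(p_joint / (p1 * p2)) if p_joint else float("-inf")

print(transition_prob("mer", "si"))  # 1.0: "mer" is always followed by "si"
print(pmi("mer", "si"))              # 3.0 bits in this toy corpus
```

On real corpora the same counts are simply accumulated over all transcribed utterances; the formulas are unchanged.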
In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community,1 and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...
Larsen, Lars Bo
This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home… The model roughly explains 50% of the observed variance in user satisfaction based on measures of task success and speech recognition accuracy, a result similar to those obtained at AT&T. The applied methods are discussed and evaluated critically.
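The usability model described above, which predicts user satisfaction from task success and recognition accuracy in the style of the PARADISE framework used at AT&T, can be sketched as an ordinary least-squares regression. All data and coefficients below are invented for illustration; they are not the OVID trial results.

```python
# Sketch of a PARADISE-style usability regression: user satisfaction is
# modelled as a linear function of task success and ASR accuracy.
# All measurements here are invented for illustration.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(X, y):
    """Least-squares coefficients for y ~ intercept + X (normal equations)."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(XtX, Xty)

# Hypothetical per-dialogue measurements: (task_success, asr_accuracy) -> satisfaction
X = [(1.0, 0.95), (1.0, 0.80), (0.0, 0.70), (1.0, 0.90), (0.0, 0.60), (0.0, 0.85)]
y = [4.5, 4.0, 2.5, 4.3, 2.0, 3.0]

b0, b_task, b_asr = fit_ols(X, y)
pred = [b0 + b_task * t + b_asr * a for t, a in X]
ss_res = sum((p - yi) ** 2 for p, yi in zip(pred, y))
mean_y = sum(y) / len(y)
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot  # share of satisfaction variance explained
```

The "explains 50% of the observed variance" claim in the abstract corresponds to an R² of about 0.5 in exactly this kind of fit.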
This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
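The abstract's central point, that array-related operations must be pushed out of interpreted loops, can be illustrated with a minimal standard-library sketch. In scientific codes this is usually done with NumPy vectorization or a mixed-language (C/Fortran) kernel; here a C-implemented built-in stands in for that idea so the example stays self-contained.

```python
# Minimal illustration: moving per-element work from an explicit Python
# loop into a batched, C-implemented operation. In real scientific code
# the batched operation would typically be a NumPy array expression.
import timeit

data = list(range(100_000))

def loop_sum(xs):
    """Interpreted loop: one bytecode round-trip per element."""
    total = 0
    for x in xs:
        total += x
    return total

def builtin_sum(xs):
    """The same reduction, but the loop runs in C."""
    return sum(xs)

assert loop_sum(data) == builtin_sum(data)

t_loop = timeit.timeit(lambda: loop_sum(data), number=20)
t_builtin = timeit.timeit(lambda: builtin_sum(data), number=20)
print(f"explicit loop: {t_loop:.3f}s, built-in sum: {t_builtin:.3f}s")
```

The same principle scales up: once element-wise operations are expressed as whole-array calls, both serial speed and parallel decomposition improve, which is the article's argument.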
Objectives: Self-assessment, as one type of alternative assessment, has gained popularity in recent years with the increased attention to learner-centered curricula, needs analysis, and learner autonomy. The aim of this study was to investigate the effect of self-assessment on Javanroodian (Kordestan) foreign language learners' oral performance ability. Methods: The assessment program involved training, practice, videotaping, feedback, assessment, and discussion. Twenty English-as-a-foreign-language students of foreign language institutes in Javanrood participated in the study. They were divided into an experimental and a control group, based on the results of English oral performance pre-tests. The research instrument consisted of a self-assessment checklist containing subcategories related to the organization of the presentation, content, linguistic factors (vocabulary use, grammatical rules, and pronunciation), and interaction with the audience. It was developed as a result of interviewing participants and their teachers and then adapting the results based on a review of available checklists in the literature. The data were collected from the experimental group members' self-assessments of their 6 oral performances and the teacher's assessment of their performances. Results: The obtained data were analyzed using descriptive and inferential methods. Results indicated that participating in the self-assessment process had a positive effect on learners' oral performance ability. Discussion: Results will have implications for policy makers, material designers and developers, teachers, and learners. They will also open up the doors to introducing new trends in assessment to teachers and learners.
Corneli, Joseph; Corneli, Miriam
"Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...
Marites Piguing HILAO
Mobile phone technology, which has a huge impact on students' lives in the digital age, may offer a new type of learning. The use of an effective tool to support learning can be affected by the factor of gender. The current research compared how male and female students perceived mobile phones as a language learning tool, used mobile phones to learn English, and developed their learning performance. A five-point rating scale questionnaire was used to collect data from 122 students, comprising 65 females and 57 males. They were enrolled in a fundamental English course in which mobile phone usage was integrated into certain language learning tasks with an aim to facilitate learning. The findings demonstrated that male and female students did not differ at a significant level in their usage of and attitudes toward mobile phones for language learning, or in their learning performance. In addition, the constraints of using mobile phones for learning that students identified in an open-ended question included the small screen and keyboard the most, followed by the intrusiveness of SMS, background knowledge, and the limited memory of mobile phones. The results of this study are important for teachers implementing mobile phone technology in language teaching and can be used as a guideline for how mobile phones can be fully incorporated into the instructional process in order to enhance learner engagement.
Lewis, Kandia; Sandilos, Lia E.; Hammer, Carol Scheffner; Sawyer, Brook E.; Méndez, Lucía I.
Research Findings: This study explored the relations between Spanish–English dual language learner (DLL) children's home language and literacy experiences and their expressive vocabulary and oral comprehension abilities in Spanish and in English. Data from Spanish–English mothers of 93 preschool-age Head Start children who resided in central Pennsylvania were analyzed. Children completed the Picture Vocabulary and Oral Comprehension subtests of the Batería III Woodcock–Muñoz and the Woodcock–Johnson III Tests of Achievement. Results revealed that the language spoken by mothers and children and the frequency of mother–child reading at home influenced children's Spanish language abilities. In addition, the frequency with which children told a story was positively related to children's performance on English oral language measures. Practice or Policy: The findings suggest that language and literacy experiences at home have a differential impact on DLLs' language abilities in their 2 languages. Specific components of the home environment that benefit and support DLL children's language abilities are discussed. PMID:27429533
Nicastri, Maria; Filipo, Roberto; Ruoppolo, Giovanni; Viccaro, Marika; Dincer, Hilal; Guerzoni, Letizia; Cuda, Domenico; Bosco, Ersilia; Prosperini, Luca; Mancini, Patrizia
To assess skills in inferences during conversations and in metaphor comprehension of unilaterally cochlear-implanted children with adequate abilities on formal language tests, comparing them with well-matched hearing peers, and to verify the influence of age at implantation on overall skills. The study was designed as a matched case-control study. 31 deaf children, unilateral cochlear implant users, with normal linguistic competence on formal language tests were compared with 31 normal-hearing matched peers. Inference and metaphor comprehension skills were assessed through the Implicit Meaning Comprehension, Situations, and Metaphors subtests of the Italian Standardized Battery of "Pragmatic Language Skills MEDEA". Differences between patient and control groups were tested by the Mann-Whitney U test. Correlations of age at implantation and time of implant use with each subtest were investigated by the Spearman rank correlation coefficient. No significant differences between the two groups were found in inferencing skills (p=0.24 and p=0.011, respectively, for Situations and Implicit Meaning Comprehension). Regarding figurative language, unilaterally cochlear-implanted children performed significantly below their normal-hearing peers in verbal metaphor comprehension (p=0.001). Performances were related to age at implantation, but not to time of implant use. Unilaterally cochlear-implanted children with a normal language level showed responses similar to those of NH children in discourse inferences, but not in figurative language comprehension. Metaphor still remains a challenge for unilateral implant users, above all when it has no reference, as demonstrated by the significant difference in verbal rather than figurative metaphor comprehension. Older age at implantation was related to worse performance on all items. These aspects, until now less investigated, should receive more attention to deeply understand the specific mechanisms involved and possible effects
Lee, Hom-Yi; Chen, Rou-An; Lin, Yu-Shiuan; Yang, Yu-Chi; Huang, Chiung-Wei; Chen, Sz-Chi
Poor writing is common in children with Attention Deficit Hyperactivity Disorder (ADHD). However, the writing performance of children with ADHD has rarely been formally explored in Taiwan, so the purpose of this study was to investigate writing features of children with ADHD in Taiwan. There were 25 children with ADHD and 25 normal children involved in a standardized writing assessment, the Written Language Test for Children, to assess their performance on the dictation, sentence combination, adding/deducting radical, cloze, and sentence making subtests. The results showed that, except for the score on the sentence combination subtest, the scores of children with ADHD were lower than those of the normal students on the rest of the subtests. Almost 60% of the ADHD children's scores were below the 25th percentile, but only 20% for normal children. Thus, writing problems were common for children with ADHD in Taiwan, too. First, children with ADHD performed worse than normal children on the dictation and cloze subtests, showing weaker abilities in retrieving correct characters from their mental lexicon. Second, children with ADHD performed worse on the adding/deducting radical subtest than normal children did. Finally, at the language level, the score of children with ADHD on the sentence combination subtest was not lower than that of normal children, implying normal grammatical competence. It is worth mentioning that Taiwanese children with ADHD ignore the details of characters when they are writing, a finding that is common across languages. Copyright © 2014 Elsevier Ltd. All rights reserved.
Segalowitz, Norman; de Almeida, Roberto G
It is well known that bilinguals perform better in their first language (L1) than in their second language (L2) in a wide range of linguistic tasks. In recent studies, however, the authors have found that bilingual participants can demonstrate faster response times to L1 stimuli than to L2 stimuli in one classification task and the reverse in a different classification task. In the current study, they investigated the reasons for this "L2-better-than-L1" effect. English-French bilinguals performed one word relatedness and two categorization tasks with verbs of motion (e.g., run) and psychological verbs (e.g., admire) in both languages. In the word relatedness task, participants judged how closely related pairs of verbs from both categories were. In a speeded semantic categorization task, participants classified the verbs according to their semantic category (psychological or motion). In an arbitrary classification task, participants had to learn how verbs had been assigned to two arbitrary categories. Participants performed better in L1 in the semantic classification task but paradoxically better in L2 in the arbitrary classification task. To account for these effects, the authors used the ratings from the word relatedness task to plot three-dimensional "semantic fields" for the verbs. Cross-language field differences were found to be significantly related to the paradoxical performance and to fluency levels. The results have implications for understanding how bilinguals represent verbs in the mental lexicon. Copyright 2002 Elsevier Science (USA).
The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author proves that the grammatical system of the dialect spoken in the v. Shevchenkove, Kiliya district, Odessa region is determined by the historical development of the Ukrainian language rather than the influence of neighboring dialects.
Van Rinsveld, Amandine; Schiltz, Christine; Landerl, Karin; Brunner, Martin; Ugen, Sonja
Differences between languages in terms of number naming systems may lead to performance differences in number processing. The current study focused on differences concerning the order of decades and units in two-digit number words (i.e., unit-decade order in German but decade-unit order in French) and how they affect number magnitude judgments. Participants performed basic numerical tasks, namely two-digit number magnitude judgments, and we used the compatibility effect (Nuerk et al. in Cognition 82(1):B25-B33, 2001) as a hallmark of language influence on numbers. In the first part we aimed to understand the influence of language on compatibility effects in adults coming from German or French monolingual and German-French bilingual groups (Experiment 1). The second part examined how this language influence develops at different stages of language acquisition in individuals with increasing bilingual proficiency (Experiment 2). Language systematically influenced magnitude judgments such that: (a) The spoken language(s) modulated magnitude judgments presented as Arabic digits, and (b) bilinguals' progressive language mastery impacted magnitude judgments presented as number words. Taken together, the current results suggest that the order of decades and units in verbal numbers may qualitatively influence magnitude judgments in bilinguals and monolinguals, providing new insights into how number processing can be influenced by language(s).
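The unit-decade compatibility effect used as the hallmark measure above has a simple operational definition (after Nuerk et al., 2001): a pair of two-digit numbers is compatible when the decade comparison and the unit comparison point in the same direction, and incompatible when they conflict. A small sketch of that classification:

```python
# Sketch of the unit-decade compatibility classification used in
# two-digit number comparison studies (after Nuerk et al., 2001).
def compatibility(a, b):
    """Classify a two-digit pair as 'compatible', 'incompatible',
    or 'neutral' (same decade digit or same unit digit)."""
    da, ua = divmod(a, 10)  # decade and unit digits of a
    db, ub = divmod(b, 10)
    if da == db or ua == ub:
        return "neutral"
    decade_smaller = da < db
    unit_smaller = ua < ub
    return "compatible" if decade_smaller == unit_smaller else "incompatible"

# 42 vs 57: 4 < 5 and 2 < 7 -> both comparisons agree
print(compatibility(42, 57))  # compatible
# 47 vs 62: 4 < 6 but 7 > 2 -> comparisons conflict
print(compatibility(47, 62))  # incompatible
```

Slower or less accurate judgments on incompatible pairs indicate that the unit digits are processed even though only the decades are needed, which is why the effect is sensitive to the order of decades and units in a speaker's number words.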
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Whyte, Elisabeth M; Nelson, Keith E
Children with autism spectrum disorder (ASD) often have difficulties with understanding pragmatic language and also nonliteral language. However, little is understood about the development of these two language domains. The current study examines pragmatic and nonliteral language development in 69 typically developing (TD) children and 27 children with ASD, ages 5-12 years. For both groups, performance on pragmatic language and nonliteral language scores on the Comprehensive Assessment of Spoken Language increased significantly with chronological age, vocabulary, syntax, and theory of mind abilities, both for children with ASD and TD children. Based on a cross-sectional trajectory analysis, the children with ASD showed slower rates of development with chronological age relative to TD children for both the pragmatic language and nonliteral language subtests. However, the groups did not show significant differences in the rate of development for either pragmatic language or nonliteral language abilities with regard to their vocabulary abilities or theory of mind abilities. It appears that children with ASD may reach levels of pragmatic language that are in line with their current levels of basic language abilities. Both basic language abilities and theory of mind abilities may aid in the development of pragmatic language and nonliteral language abilities. After reading this article, the reader will understand: (1) the relation between basic language abilities (vocabulary and syntax) and advanced language abilities (pragmatic and nonliteral language), (2) how the cross-sectional trajectory analysis differs from traditional group matching studies, and (3) how pragmatic and nonliteral language development for children with autism shows both similarities and differences compared to typically developing children. Copyright © 2015 Elsevier Inc. All rights reserved.
It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and their associated neural mechanisms. We conducted an fMRI study of young and older healthy adults performing auditory lexical decisions on concrete versus abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved in healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left-hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing.
Lauren B. Collister
Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.
Van Heerden, C
speech recognisers for a diverse multitude of languages. The paper investigates the feasibility of developing small-vocabulary speaker-independent ASR systems designed for use in a telephone-based information system, using ten resource-scarce languages...
Lindau, Tâmara Andrade; Rossi, Natalia Freitas; Giacheti, Celia Maria
The objective was to test whether the Brazilian version of the Preschool Language Assessment Instrument - Second Edition (PLAI-2) has the potential to assess and identify differences in typical language development of Portuguese-speaking preschoolers. The study included 354 children of both genders with typical language development who were between the ages of 3 years and 5 years 11 months. The version of the PLAI-2 previously translated into Brazilian Portuguese was used to assess the communication skills of these preschool-age children. Statistically significant differences were found between the age groups, and the raw score tended to increase as a function of age. With nonstandardized assessments, the performances of the younger groups revealed behavioral profiles (e.g., nonresponsive, impulsive behavior) that directly influenced the evaluation. The findings of this study show that the PLAI-2 is effective in identifying differences in language development among Brazilian children of preschool age. Future research should include studies validating and standardizing these findings. © 2016 S. Karger AG, Basel.
Lillo-Martin, Diane C; Gajewski, Jon
Linguistic research has identified abstract properties that seem to be shared by all languages-such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language-in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. WIREs Cogn Sci 2014, 5:387-401. doi: 10.1002/wcs.1297 This article is categorized under: Linguistics > Linguistic Theory. © 2014 The Authors. WIREs Cognitive Science published by John Wiley & Sons, Ltd.
Lu, Zhongshe; Liu, Meihua
The present study explored the interrelations between foreign language (FL) reading anxiety, FL reading strategy use, and their interactive effect on FL reading comprehension performance at the tertiary level in China. Analyses of the survey data collected from 1702 university students yielded the following results: (a) both the Foreign Language Reading Anxiety Scale (FLRAS) and the Foreign Language Reading Strategy Use Scale (FLRSUS) had important subcomponents, (b) more than half of the students gen...
Freed, Jenny; Adams, Catherine; Lockton, Elaine
Children who have pragmatic language impairment (CwPLI) have difficulties with the use of language in social contexts and show impairments in above-sentence-level language tasks. Previous studies have found that typically developing children's reading comprehension (RC) is predicted by reading accuracy and spoken sentence-level comprehension (SLC). This study explores the predictive ability of these factors and above-sentence-level comprehension (ASLC) on RC skills in a group of CwPLI. Sixty-nine primary-school-aged CwPLI completed a measure of RC along with measures of reading accuracy, spoken SLC, and both visual (pictorially presented) and spoken ASLC tasks. Regression analyses showed that reading accuracy was the strongest predictor of RC. Visual ASLC did not explain unique variance in RC on top of spoken SLC. In contrast, a measure of spoken ASLC explained unique variance in RC, independent from that explained by spoken SLC. A regression model with nonverbal intelligence, reading accuracy, spoken SLC, and spoken ASLC as predictors explained 74.2% of the variance in RC. Findings suggest that spoken ASLC may measure additional factors that are important for RC success in CwPLI and should be included in routine assessments for language and literacy learning in this group. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon
Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282
Caselli, Naomi K; Pyers, Jennie E
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
Pallier, C; Dehaene, S; Poline, J-B; LeBihan, D; Argenti, A-M; Dupoux, E; Mehler, J
Do the neural circuits that subserve language acquisition lose plasticity as they become tuned to the maternal language? We tested adult subjects born in Korea and adopted by French families in childhood; they have become fluent in their second language and report no conscious recollection of their native language. In behavioral tests assessing their memory for Korean, we found that they do not perform better than a control group of native French subjects who have never been exposed to Korean. We also used event-related functional magnetic resonance imaging to monitor cortical activations while the Korean adoptees and native French listened to sentences spoken in Korean, French and other, unknown, foreign languages. The adopted subjects did not show any specific activations to Korean stimuli relative to unknown languages. The areas activated more by French stimuli than by foreign stimuli were similar in the Korean adoptees and in the French native subjects, but with relatively larger extents of activation in the latter group. We discuss these data in light of the critical period hypothesis for language acquisition.
Allendorfer, Jane B; Lindsell, Christopher J; Siegel, Miriam; Banks, Christi L; Vannest, Jennifer; Holland, Scott K; Szaflarski, Jerzy P
To test the existence of sex differences in cortical activation during verb generation when performance is controlled for. Twenty male and 20 female healthy adults underwent functional magnetic resonance imaging (fMRI) using a covert block-design verb generation task (BD-VGT) and its event-related version (ER-VGT) that allowed for intra-scanner recordings of overt responses. Task-specific activations were determined using the following contrasts: BD-VGT covert generation>finger-tapping; ER-VGT overt generation>repetition; ER-VGT overt>covert generation. Lateral cortical regions activated during each contrast were used for calculating language lateralization index scores. Voxelwise regressions were used to determine sex differences in activation, with and without controlling for performance. Each brain region showing male/female activation differences for ER-VGT overt generation>repetition (isolating noun-verb association) was defined as a region of interest (ROI). For each subject, the signal change in each ROI was extracted, and the association between ER-VGT activation related to noun-verb association and performance was assessed separately for each sex. Males and females performed similarly on language assessments, had similar patterns of language lateralization, and exhibited similar activation patterns for each fMRI task contrast. Regression analysis controlling for overt intra-scanner performance either abolished (BD-VGT) or reduced (ER-VGT) the observed differences in activation between sexes. The main difference between sexes occurred during ER-VGT processing of noun-verb associations, where males showed greater activation than females in the right middle/superior frontal gyrus (MFG/SFG) and the right caudate/anterior cingulate gyrus (aCG) after controlling for performance. Better verb generation performance was associated with increased right caudate/aCG activation in males and with increased right MFG/SFG activation in females. Males and females exhibit
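The language lateralization index scores mentioned in this abstract are conventionally computed from counts of suprathreshold voxels in homologous left- and right-hemisphere regions. A minimal sketch of that standard formula follows; the thresholding details and the example voxel counts are illustrative assumptions, not values taken from the study:

```python
def lateralization_index(left_voxels: int, right_voxels: int) -> float:
    """LI = (L - R) / (L + R).

    +1 means fully left-lateralized activation, -1 fully right-lateralized,
    and 0 means bilateral activation.
    """
    if left_voxels + right_voxels == 0:
        raise ValueError("no suprathreshold voxels in either hemisphere")
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# Hypothetical counts: 1200 suprathreshold voxels on the left, 400 on the right
li = lateralization_index(1200, 400)  # 0.5, i.e. left-lateralized
```

Studies typically classify |LI| above some cutoff (often 0.2) as lateralized; that cutoff is a common convention, not something stated in this abstract.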
Ingvalson, Erin M.; Wong, Patrick C. M.
Cochlear implants (CI) have brought with them hearing ability for many prelingually deafened children. Advances in CI technology have brought not only hearing ability but speech perception to these same children. Concurrent with the development of speech perception has come spoken language development, and one goal now is that prelingually deafened CI recipient children will develop spoken language capabilities on par with those of normal hearing (NH) children. This goal has not been met pure...
Johannessen, Janne Bondi; Salmons, Joseph C.; Westergaard, Marit; Anderssen, Merete; Arnbjörnsdóttir, Birna; Allen, Brent; Pierce, Marc; Boas, Hans C.; Roesch, Karen; Brown, Joshua R.; Putnam, Michael; Åfarli, Tor A.; Newman, Zelda Kahan; Annear, Lucas; Speth, Kristin
This book presents new empirical findings about Germanic heritage varieties spoken in North America: Dutch, German, Pennsylvania Dutch, Icelandic, Norwegian, Swedish, West Frisian and Yiddish, and varieties of English spoken both by heritage speakers and in communities after language shift. The volume focuses on three critical issues underlying the notion of ‘heritage language’: acquisition, attrition and change. The book offers theoretically-informed discussions of heritage language processe...
This article addresses validity and fairness in the testing of English language learners (ELLs)--students in the United States who are developing English as a second language. It discusses limitations of current approaches to examining the linguistic features of items and their effect on the performance of ELL students. The article submits that…
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Lamônica, Dionísia Aparecida Cusin; Silva-Mori, Mariana Jales Felix da; Ribeiro, Camila da Costa; Maximino, Luciana Paula
To compare performance on receptive and expressive language abilities between children with cleft lip and palate and children without cleft lip and palate with typical development at 12 to 36 months of chronological age. The sample consisted of 60 children aged 12 to 36 months: 30 with a cleft lip and palate diagnosis and 30 without cleft lip and palate, with typical development. The groups were paired according to gender, age (in months), and socioeconomic level. The procedures consisted of analysis of medical records, anamnesis with family members, and evaluation with the Early Language Milestone Scale (ELMS). The chart analysis showed 63.34% of the children with unilateral cleft lip and palate, 16.66% with bilateral incisive transforamen cleft, and 20% with post-foramen cleft. Children with cleft lip and palate underwent surgeries (lip repair and/or palatoplasty) at the recommended ages and participated in early intervention programs; 40% presented a history of recurrent otitis, and 50% attended school. Statistical analysis used the Mann-Whitney test with a significance level of p < 0.05. Children with cleft lip and palate showed statistically significant lower performance in receptive and expressive language compared with children without cleft lip and palate.
Ngo, Mary Kim; Vu, Kim-Phuong L; Strybel, Thomas Z
We examined the interaction between music and tone language experience as related to relative pitch processing by having participants judge the direction and magnitude of pitch changes in a relative pitch task. Participants' performance on this relative pitch task was assessed using the Cochran-Weiss-Shanteau (CWS) index of expertise, based on a ratio of discrimination over consistency in participants' relative pitch judgments. Testing took place in 2 separate sessions on different days to assess the effects of practice on participants' performance. Participants also completed the Montreal Battery of Evaluation of Amusia (MBEA), an existing measure comprising subtests aimed at evaluating relative pitch processing abilities. Musicians outperformed nonmusicians on both the relative pitch task, as measured by the CWS index, and the MBEA, but tonal language speakers outperformed non-tonal language speakers only on the MBEA. A closer look at the discrimination and consistency component scores of the CWS index revealed that musicians were better at discriminating different pitches and more consistent in their assessments of the direction and magnitude of relative pitch change.
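The CWS index described in this abstract rates expertise as a ratio of discrimination to (in)consistency in repeated judgments. One common formulation divides the variance of each stimulus's mean rating (discrimination) by the mean within-stimulus variance across repetitions (inconsistency); the exact computation used in the study may differ, and the toy ratings below are invented for illustration:

```python
import statistics

def cws_index(judgments):
    """CWS-style expertise index from repeated judgments.

    `judgments` maps each stimulus to the list of ratings a participant
    gave it across repetitions. Discrimination = variance of the
    per-stimulus mean ratings; inconsistency = mean within-stimulus
    variance. Higher values indicate more expert-like judgment.
    """
    means = [statistics.mean(v) for v in judgments.values()]
    discrimination = statistics.pvariance(means)
    inconsistency = statistics.mean(
        statistics.pvariance(v) for v in judgments.values()
    )
    return discrimination / inconsistency

# Toy data: two repetitions of judgments for three pitch-change stimuli
ratings = {"up_small": [1, 2], "up_large": [3, 3], "down_small": [-1, -1]}
score = cws_index(ratings)
```

A participant who separates the stimuli widely and repeats each judgment consistently gets a high score; inconsistent repetitions inflate the denominator and pull the score down.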
Gialluisi, Alessandro; Visconti, Alessia; Willcutt, Erik G; Smith, Shelley D; Pennington, Bruce F; Falchi, Mario; DeFries, John C; Olson, Richard K; Francks, Clyde; Fisher, Simon E
Reading and language skills have overlapping genetic bases, most of which are still unknown. Part of the missing heritability may be caused by copy number variants (CNVs). In a dataset of children recruited for a history of reading disability (RD, also known as dyslexia) or attention deficit hyperactivity disorder (ADHD) and their siblings, we investigated the effects of CNVs on reading and language performance. First, we called CNVs with PennCNV using signal intensity data from Illumina OmniExpress arrays (~723,000 probes). Then, we computed the correlation between measures of CNV genomic burden and the first principal component (PC) score derived from several continuous reading and language traits, both before and after adjustment for performance IQ. Finally, we screened the genome, probe-by-probe, for association with the PC scores, through two complementary analyses: we tested a binary CNV state assigned for the location of each probe (i.e., CNV+ or CNV-), and we analyzed continuous probe intensity data using FamCNV. No significant correlation was found between measures of CNV burden and PC scores, and no genome-wide significant associations were detected in probe-by-probe screening. Nominally significant associations were detected (p ≈ 10^-2 to 10^-3) within CNTN4 (contactin 4) and CTNNA3 (catenin alpha 3). These genes encode cell adhesion molecules with a likely role in neuronal development, and they have been previously implicated in autism and other neurodevelopmental disorders. A further, targeted assessment of candidate CNV regions revealed associations with the PC score (p ≈ 0.026-0.045) within CHRNA7 (cholinergic nicotinic receptor alpha 7), which encodes a ligand-gated ion channel and has also been implicated in neurodevelopmental conditions and language impairment. FamCNV analysis detected a region of association (p ≈ 10^-2 to 10^-4) within a frequent deletion ~6 kb downstream of ZNF737 (zinc finger protein 737, uncharacterized protein), which was also
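The burden analysis in this abstract correlates a per-individual CNV count with the first principal component of several reading and language measures. The skeleton of that computation can be sketched as below; all numbers here are simulated stand-ins (the study used PennCNV output and real psychometric scores, and additionally adjusted for performance IQ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # hypothetical number of children
traits = rng.normal(size=(n, 5))          # 5 continuous reading/language measures
cnv_burden = rng.poisson(3.0, size=n)     # e.g. number of CNV calls per individual

# First principal component of the z-scored trait matrix (via SVD)
z = (traits - traits.mean(axis=0)) / traits.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
pc1 = z @ vt[0]                           # PC score per individual

# Pearson correlation between CNV burden and the PC score
r = np.corrcoef(cnv_burden, pc1)[0, 1]
```

With purely random data, as here, r should hover near zero; the study's null result corresponds to such a correlation failing to reach significance on real data.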
Navigating Hybridized Language Learning Spaces through Translanguaging Pedagogy: Dual Language Preschool Teachers' Languaging Practices in Support of Emergent Bilingual Children's Performance of Academic Discourse
Gort, Mileidis; Sembiante, Sabrina Francesca
In recent years, there has been a growing interest among policymakers, practitioners, and researchers in early bilingual development and the unique role of the educational setting's language policy in this development. In this article, we describe how one dual language preschool teacher, in partnership with two co-teachers, navigated the tensions…
Cooper, Angela; Bradlow, Ann R.
Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...
Winegarden, Babbi; Glaser, Dale; Schwartz, Alan; Kelly, Carolyn
Medical College Admission Test (MCAT) scores are widely used as part of the decision-making process for selecting candidates for admission to medical school. Applicants who learned English as a second language may be at a disadvantage when taking tests in their non-native language. Preliminary research found significant differences between English language learners (ELLs), applicants who learned English after the age of 11 years, and non-ELL examinees on the Verbal Reasoning (VR) sub-test of the MCAT. The purpose of this study was to determine if relationships between VR sub-test scores and measures of medical school performance differed between ELL and non-ELL students. Scores on the MCAT VR sub-test and student performance outcomes (grades, examination scores, and markers of distinction and difficulty) were extracted from University of California San Diego School of Medicine admissions files and the Association of American Medical Colleges database for 924 students who matriculated in 1998-2005 (graduation years 2002-2009). Regression models were fitted to determine whether MCAT VR sub-test scores predicted medical school performance similarly for ELLs and non-ELLs. For several outcomes, including pre-clerkship grades, academic distinction, US Medical Licensing Examination Step 2 Clinical Knowledge scores and two clerkship shelf examinations, ELL status significantly affects the ability of the VR score to predict performance. Higher correlations between VR score and medical school performance emerged for non-ELL students than for ELL students for each of these outcomes. The MCAT VR score should be used with discretion when assessing ELL applicants for admission to medical school. © Blackwell Publishing Ltd 2012.
There are conflicting claims among scholars on whether the structural outputs of the types of English spoken in countries where English is used as a second language give such speech forms the status of varieties of English. This study examined those morphological features considered to be marked features of the variety spoken in Nigeria according to Kirkpatrick (2011) and the variety spoken in Malaysia by considering the claims of the Missing Surface Inflection Hypothesis (MSIH) a Second Lan...
South Africa have been affected by the policies of apartheid, and its educational and linguistic consequences, in a .... teaching strategies, and more recently of the perception that a signed language is a manual form ... of color and on the basis of the (former) official spoken languages designated by the apartheid education ...
In Africa there are a number of languages spoken, some of which have their own indigenous scripts that are used for writing. In this paper we assess these languages and present an in-depth script analysis for the Amharic writing system, one of the well-known indigenous scripts of Africa. Amharic is the official and working ...
The present study is a translation of the work "Stroi Arabskogo Yazyka" by the eminent Russian linguist and Semitics scholar N.Y. Yushmanov. It deals concisely with the position of Arabic among the Semitic languages and the relation of the literary (Classical) language to the various modern spoken dialects, and presents a condensed but…
bilingualism and extensive translation. May it be noted that all the languages of the world (7,000 in number; cf. Akinlabi and Connell, 2007) cannot be spoken even skeletally by any individual. Therefore the multilingualism proposed by Crystal will have to favor only a few world languages. Toolan's extensive bilingualism is ...
This paper surveys some of the changes in teaching the four language skills in the past 15 years. It focuses on two main changes for each skill: understanding spoken language and willingness to communicate for speaking; product, process, and genre approaches and a focus on feedback for writing; extensive reading and literature for reading; and…
Rinaldi, M Cristina; Pizzamiglio, Luigi
We present data from right-brain-damaged patients, with and without spatial hemi-inattention, which show the influence of hemispatial deficits on spoken language processing. We explored the findings of a previous study, which used an emphatic stress detection task and suggested spatial transcoding of a spoken active sentence in a 'language line'. This transcoding was impaired in its initial portion (the subject-word) when the neglect syndrome was present. By expanding the original methodology, the present study provides a deeper understanding of the level of spoken language processing involved in the hemi-inattentional bias. To ascertain the role played by syntactic structure, active and passive sentences were compared. Sequences composed of musical notes and of unrelated nouns were also compared to determine whether the bias was manifest with any sequence of events (not only linguistic ones) deployed over time, and with a sequence of linguistic events not embedded in a structured syntactic frame. Results showed that hemi-inattention exerted an influence only when a syntactically structured linguistic input (i.e., a sentence with agent of action, action, and recipient of action) was processed, and that it did not interfere when a sequence of non-linguistic sounds or unrelated words was presented. Furthermore, when passing from active to passive sentences, the hemi-inattentional bias was inverted, suggesting that hemi-inattention primarily involves the logical subject of the sentence, which has an inverted position in passive sentences. These results strongly suggest that hemi-inattention acts on the spatial transcoding of the deep structure of spoken language.
Shuai, Lan; Malins, Jeffrey G
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
Cattani, Allegra; Abbot-Smith, Kirsten; Farag, Rafalla; Krott, Andrea; Arreckx, Frédérique; Dennis, Ian; Floccia, Caroline
Bilingual children are under-referred due to an ostensible expectation that they lag behind their monolingual peers in their English acquisition. The recommendations of the Royal College of Speech and Language Therapists (RCSLT) state that bilingual children should be assessed in both the languages known by the children. However, despite these recommendations, a majority of speech and language professionals report that they assess bilingual children only in English, as bilingual children come from a wide array of language backgrounds and standardized language measures are not available for the majority of these. Moreover, even when such measures do exist, they are not tailored for bilingual children. We asked whether a cut-off exists in the proportion of exposure to English at which one should expect a bilingual toddler to perform as well as a monolingual on a test standardized for monolingual English-speaking children. Thirty-five bilingual 2;6-year-olds exposed to British English plus an additional language and 36 British monolingual toddlers were assessed on the auditory component of the Preschool Language Scale, British Picture Vocabulary Scale and an object-naming measure. All parents completed the Oxford Communicative Development Inventory (Oxford CDI) and an exposure questionnaire that assessed the proportion of English in the language input. Where the CDI existed in the bilingual's additional language, these data were also collected. Hierarchical regression analyses found the proportion of exposure to English to be the main predictor of the performance of bilingual toddlers. Bilingual toddlers who received 60% exposure to English or more performed like their monolingual peers on all measures. K-means cluster analyses and Levene variance tests confirmed the estimated English exposure cut-off at 60% for all language measures. Finally, for one additional language for which we had multiple participants, additional language CDI production scores were
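The K-means step used to confirm the 60% exposure cut-off can be illustrated with a one-dimensional two-cluster split, which for a single variable reduces to finding the partition point minimizing within-cluster sum of squares. The sketch below implements that reduction directly; the exposure proportions are invented for illustration (the study clustered real exposure and score data and checked the split with Levene tests):

```python
def two_means_cutoff(values):
    """Boundary of the optimal 2-cluster split of 1-D data.

    Equivalent to k-means with k=2 on a single variable: try every
    sorted split point and keep the one minimizing the total
    within-cluster sum of squares; return the midpoint between the
    two clusters' nearest members.
    """
    xs = sorted(values)

    def wss(group):
        m = sum(group) / len(group)
        return sum((x - m) ** 2 for x in group)

    best = None
    for i in range(1, len(xs)):
        total = wss(xs[:i]) + wss(xs[i:])
        if best is None or total < best[0]:
            best = (total, (xs[i - 1] + xs[i]) / 2)
    return best[1]

# Hypothetical proportions of English exposure for ten bilingual toddlers
exposure = [0.20, 0.25, 0.30, 0.35, 0.40, 0.65, 0.70, 0.75, 0.80, 0.90]
cutoff = two_means_cutoff(exposure)  # boundary falls between 0.40 and 0.65
```

On data with a genuine gap, as here, the recovered boundary sits in that gap; on the study's data the analogous boundary landed near 60% exposure.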
Stern, Alissa Joy
For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Ling, Bernadette; Kettle, Margaret
In second language classrooms, listening is gaining recognition as an active element in the processes of learning and using a second language. Currently, however, much of the teaching of listening prioritises comprehension without sufficient emphasis on the skills and strategies that enhance learners' understanding of spoken language. This paper…
In order to preserve distinctive cultures, people are eager to devise writing systems for their languages as recording tools. Mandarin, Taiwanese, and Hakka are three major and the most popular dialects of the Han languages spoken in Chinese society. Their writing systems all use Han characters. Various and independent phonetic…
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
Bird, Sonya; Kell, Sarah
Most Indigenous language revitalization programs in Canada currently emphasize spoken language. However, virtually no research has been done on the role of pronunciation in the context of language revitalization. This study set out to gain an understanding of attitudes around pronunciation in the SENCOTEN-speaking community, in order to determine…
Puskás, Tünde; Björk-Willén, Polly
This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…
a case for the existence of a Kiswahili sign language since KSL is a natural language with its own autonomous grammar distinct from that of any spoken language. In this paper, we shall argue that the Kiswahili mouthed KSL signs are an outcome of contact between KSL – Kiswahili bilinguals and their hearing Kiswahili ...
Byram, Michael; Wagner, Manuela
Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…
The Ndebele language corpus described here is that compiled by the ALLEX Project (now ALRI) at the University of Zimbabwe. It is intended to reflect as much as possible the Ndebele language as spoken in Zimbabwe. The Ndebele language corpus was built in order to provide much-needed material for the study of the ...
A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)
This article motivates the use of graphics and visualization for efficient utilization of High Performance Fortran's (HPF's) data distribution facilities. It proposes a graphical toolkit consisting of exploratory and estimation tools which allow the programmer to navigate through complex distributions and to obtain graphical ratings with respect to load distribution and communication. The toolkit has been implemented in a mapping design and visualization tool which is coupled with a compilation system for the HPF predecessor Vienna Fortran. Since this language covers a superset of HPF's facilities, the tool may also be used for visualization of HPF data structures.
This article reports the findings of an action research study on a professional development program and its impact on the classroom performance of in-service English teachers who worked at a language institute of a Colombian state university. Questionnaires, semi-structured interviews, class observations, and a researcher’s journal were used as data collection instruments. Findings suggest that these in-service teachers improved their classroom performance as their teaching became more communicative, organized, attentive to students’ needs, and principled. In addition, theory, practice, reflection, and the role of the tutor combined effectively to help the in-service teachers improve classroom performance. It was concluded that these programs must be based on teachers’ philosophies and needs and effectively articulate theory, practice, experience, and reflection.
Rommers, Joost; Meyer, Antje S; Huettig, Falk
The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.
Newman, Aaron J.; Supalla, Ted; Hauser, Peter; Newport, Elissa; Bavelier, Daphne
Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages, but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, suggesting that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices, as well as the basal ganglia, medial frontal and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages. PMID:20347996
Farfan, Jose Antonio Flores
Even though Nahuatl is the most widely spoken indigenous language in Mexico, it is endangered. Threats include poor support for Nahuatl-speaking communities, migration of Nahuatl speakers to cities where English and Spanish are spoken, prejudicial attitudes toward indigenous languages, lack of contact between small communities of different…
Richtsmeier, Peter T; Goffman, Lisa
Children with specific language impairment (SLI) often perform below expected levels, including on tests of motor skill and in learning tasks, particularly procedural learning. In this experiment we examined the possibility that children with SLI might also have a motor learning deficit. Twelve children with SLI and thirteen children with typical development (TD) produced complex nonwords in an imitation task. Productions were collected across three blocks, with the first and second blocks on the same day and the third block one week later. Children's lip movements while producing the nonwords were recorded using an Optotrak camera system. Movements were then analyzed for production duration and stability. Movement analyses indicated that both groups of children produced shorter productions in later blocks (corroborated by an acoustic analysis), and the rate of change was comparable for the TD and SLI groups. A nonsignificant trend for more stable productions was also observed in both groups. SLI is regularly accompanied by a motor deficit, and this study does not dispute that. However, children with SLI learned to make more efficient productions at a rate similar to their peers with TD, revealing some modification of the motor deficit associated with SLI. The reader will learn about deficits commonly associated with specific language impairment (SLI) that often occur alongside the hallmark language deficit. The authors present an experiment showing that children with SLI improved speech motor performance at a similar rate compared to typically developing children. The implication is that speech motor learning is not impaired in children with SLI.
extent of the emphasis on the acquisition of vocabulary in school curricula. After a brief introduction, the author looks in chapter 2 at major books which in the 20th century worked on a controlled vocabulary for foreign-language learners in Europe, Asia and America. This section provides the background for the elaboration of ...
Gabriele Stein. Developing Your English Vocabulary: A Systematic New Approach. 2002, VIII + 272 pp. ... objective of this book is twofold: to compile a lexical core and to maximise the skills of language students by ... chapter 3, she offers twelve major ways of expanding this core-word list and differentiating lexical items to ...
data of the corpus and includes more formal audio material (lectures, TV and radio broadcasting). The book begins with a 20-page introduction, which is sometimes quite technical, but ... grounds words that belong to the core vocabulary of the language such as tool-. Lexikos 15 (AFRILEX-reeks/series 15: 2005): 338-339 ...
Gollan, Tamar H; Starr, Jennie; Ferreira, Victor S
Acquiring a heritage language (HL), a minority language spoken primarily at home, is often a major step toward achieving bilingualism. Two studies examined factors that promote HL proficiency. Chinese-English and Spanish-English undergraduates and Hebrew-English children named pictures in both their languages, and they or their parents completed language history questionnaires. HL picture-naming ability correlated positively with the number of different HL speakers participants spoke to as children, independently of each language's frequency of use, and without negatively affecting English picture-naming ability. HL performance increased also when primary caregivers had lower English proficiency, with later English age of acquisition, and (in children) with increased age. These results suggest that a prescription for increasing bilingual proficiency is regular interaction with multiple HL speakers. Responsible cognitive mechanisms could include greater variety of words used by different speakers, representational robustness from exposure to variations in form, or multiple retrieval cues, perhaps analogous to contextual diversity effects.
Wang, Shenggao; Vásquez, Camilla
This quasi-experimental study examined whether there was any difference in the quantity and quality of the written texts produced by two groups (N = 18) of intermediate Chinese language learners. Over one semester, students in the experimental (E) group wrote weekly updates and comments in Chinese on a designated Facebook group page, while…
Kemmerer, David; Tranel, Daniel; Manzel, Ken
We describe a brain-damaged subject, RR, who manifests superior written over spoken naming of concrete entities from a wide range of conceptual domains. His spoken naming difficulties are due primarily to an impairment of lexical-phonological processing, which implies that his successful written naming does not depend on prior access to the sound structures of words. His performance therefore provides further support for the "orthographic autonomy hypothesis," which maintains that written word production is not obligatorily mediated by phonological knowledge. The case of RR is especially interesting, however, because for him the dissociation between impaired spoken naming and relatively preserved written naming is significantly greater for two categories of unique concrete entities that are lexicalised as proper nouns (specifically, famous faces and famous landmarks) than for five categories of nonunique (i.e., basic level) concrete entities that are lexicalised as common nouns (specifically, animals, fruits/vegetables, tools/utensils, musical instruments, and vehicles). Furthermore, RR's predominant error types in the oral modality are different for the two types of stimuli: omissions for unique entities vs. semantic errors for nonunique entities. We consider two alternative explanations for RR's extreme difficulty in producing the spoken forms of proper nouns: (1) a disconnection between the meanings of proper nouns and the corresponding word nodes in the phonological output lexicon; or (2) damage to the word nodes themselves. We argue that RR's combined behavioural and lesion data do not clearly adjudicate between the two explanations, but that they favour the first explanation over the second.
Janse, Esther; Jesse, Alexandra
Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.
Goh, Winston D; Yap, Melvin J; Lau, Mabel C; Ng, Melvin M R; Tan, Luuan-Chin
A large number of studies have demonstrated that semantic richness dimensions (e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence) influence word recognition processes. Some of these richness effects appear to be task-general, while others have been found to vary across tasks. Importantly, almost all of these findings have been found in the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects are also found in spoken word recognition, using a megastudy approach that allows for an examination of the relative contribution of the various semantic properties to performance in two tasks: lexical decision, and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction, with faster responses for spoken words that were concrete, emotionally valenced, and with a high number of features, while arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed.
Turkan, Sultan; Liu, Ou Lydia
The performance of English language learners (ELLs) has been a concern given the rapidly changing demographics in US K-12 education. This study aimed to examine whether students' English language status has an impact on their inquiry science performance. Differential item functioning (DIF) analysis was conducted with regard to ELL status on an inquiry-based science assessment, using a multifaceted Rasch DIF model. A total of 1,396 seventh- and eighth-grade students took the science test, including 313 ELL students. The results showed that, overall, non-ELLs significantly outperformed ELLs. Of the four items that showed DIF, three favored non-ELLs while one favored ELLs. The item that favored ELLs provided a graphic representation of a science concept within a family context. There is some evidence that constructed-response items may help ELLs articulate scientific reasoning using their own words. Assessment developers and teachers should pay attention to the possible interaction between linguistic challenges and science content when designing assessment for and providing instruction to ELLs.
This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…
Dekel, N.; Brosh, H.
This paper describes, from a linguistic point of view, the impact of the Hebrew spoken in Israel on the Arabic spoken natively by Israeli Arabs. Two main conditions enable mutual influences between Hebrew and Arabic in Israel: the existence of two large groups of people speaking both languages
Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.
Perniss, P.M.; Zwitserlood, I.E.P.; Özyürek, A.
The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature
Blom, W.B.T.; van Dijk, Chantal; Vasic, Nada; van Witteloostuijn, Merel; Avrutin, S.
The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages
bilingualism in the natural sign language and the dominant spoken language of the society. Students would study not only the common curriculum shared with their hearing peers, but would also study the history of the Deaf culture and Deaf communities in other parts of the world. Thus, the goal of such a programme would ...
Marshall, Chloë; Jones, Anna; Denmark, Tanya; Mason, Kathryn; Atkinson, Joanna; Botting, Nicola; Morgan, Gary
Several recent studies have suggested that deaf children perform more poorly on working memory tasks compared to hearing children, but these studies have not been able to determine whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because findings come mostly from (1) tasks that are verbal as opposed to non-verbal, and (2) involve deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities within a normal developmental timeframe for language acquisition). A more direct, and therefore stronger, test of the hypothesis that the type and quality of language exposure impact working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition and reduced quality of language input compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6–11 years: hearing children (n = 28), deaf children who were native users of British Sign Language (BSL; n = 8), and deaf children who used BSL but who were not native signers (n = 19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and whether scores on language tasks predicted scores on NVWM tasks. For the two executive-loaded NVWM tasks included in our battery, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that scores on the vocabulary measure