Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate other L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Broersma, Mirjam; Cutler, Anne
L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves concurrent activation of multiple word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two…
Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard
Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...
Kobayashi, Yuichiro; Abe, Mariko
The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan
How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
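The string-kernel idea in the abstract above can be made concrete with a small sketch. This is an illustrative toy, not the published model: a word is coded as a weighted bag of ordered phoneme pairs ("open diphones"), a representation that is the same wherever in time the word occurs. The function names, the phoneme coding, and the decay parameter are all assumptions invented for the example.

```python
# Toy "string kernel" word representation: a weighted bag of ordered
# phoneme pairs (open diphones). Because only the relative order of
# phonemes matters, the representation is time-invariant: no
# per-time-step reduplication of units is needed, unlike TRACE.

def open_diphones(phonemes, decay=0.5):
    """Map a phoneme sequence to {(p1, p2): weight} for all ordered
    pairs; non-adjacent pairs are down-weighted by `decay` per
    intervening phoneme (a common choice in gappy string kernels)."""
    rep = {}
    for i in range(len(phonemes)):
        for j in range(i + 1, len(phonemes)):
            pair = (phonemes[i], phonemes[j])
            rep[pair] = rep.get(pair, 0.0) + decay ** (j - i - 1)
    return rep

def similarity(rep_a, rep_b):
    """Dot product over shared diphones: a crude lexical match score."""
    return sum(w * rep_b[p] for p, w in rep_a.items() if p in rep_b)

# "ham" shares its diphones with "hamster", so their representations
# overlap regardless of where "ham" would fall in the speech stream.
ham = open_diphones(["h", "a", "m"])
hamster = open_diphones(["h", "a", "m", "s", "t", "er"])
```

On this toy scheme a lexicon over N phonemes needs at most N² diphone units in total, rather than a fresh copy of every unit at every time slice, which illustrates where the claimed savings come from.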
Kyle Tran Myhre
In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and the current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.
Yip, Michael C. W.
The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, this kind of probabilistic information about syllables may cue the locations…
The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f
Qu, Qingqing; Damian, Markus F
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,
Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…
Lauren B. Collister
Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.
Cooper, Angela; Bradlow, Ann R.
Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...
Wang, Shinmin; Allen, Richard J; Fang, Shin-Yi; Li, Ping
The ability to create temporary binding representations of information from different sources in working memory has recently been found to relate to the development of monolingual word recognition in children. The current study explored this possible relationship in an adult word-learning context. We assessed whether the relationship between cross-modal working memory binding and lexical development would be observed in the learning of associations between unfamiliar spoken words and their semantic referents, and whether it would vary across experimental conditions in first- and second-language word learning. A group of English monolinguals were recruited to learn 24 spoken disyllabic Mandarin Chinese words in association with either familiar or novel objects as semantic referents. They also completed a working memory task in which their ability to temporarily bind auditory-verbal and visual information was measured. Participants' performance on this task was uniquely linked to their learning and retention of words for both novel objects and for familiar objects. This suggests that, at least for spoken language, cross-modal working memory binding might play a similar role in second language-like (i.e., learning new words for familiar objects) and in more native-like situations (i.e., learning new words for novel objects). Our findings provide new evidence for the role of cross-modal working memory binding in L1 word learning and further indicate that early stages of picture-based word learning in L2 might rely on similar cognitive processes as in L1.
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…
Sirc, Geoffrey; Sutton, Terri
In June 2008, the Department of English at the University of Minnesota partnered with the Minnesota Spoken Word Association to inaugurate an outreach literacy program for local high-school students and teachers. The four-day institute, named "In Da Tradition," used spoken word and hip hop to teach academic and creative writing to core-city…
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Yip, Michael C.
Two word-spotting experiments were conducted to examine whether native Cantonese listeners are constrained by phonotactic information in the spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information about Chinese…
Hamada, Megumi; Goya, Hideki
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…
The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34, 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…
de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.
Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and
Weber, A.C.; Cutler, A.
Six eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target
Bordag, Denisa; Kirschenbaum, Amit; Rogahn, Maria; Opitz, Andreas
The present semantic priming study explores the integration of newly learnt L2 German words into the L2 semantic network of German advanced learners. It provides additional evidence in support of earlier findings reporting semantic inhibition effects for emergent representations. An inhibitory mechanism is proposed that temporarily decreases the…
L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…
Gabriele Stein. Developing Your English Vocabulary: A Systematic New Approach. 2002, VIII + 272 pp. ... objective of this book is twofold: to compile a lexical core and to maximise the skills of language students by ... chapter 3, she offers twelve major ways of expanding this core-word list and differentiating lexical items to ...
data of the corpus and includes more formal audio material (lectures, TV and radio broadcasting). The book begins with a 20-page introduction, which is sometimes quite technical, but ... grounds words that belong to the core vocabulary of the language such as tool-. Lexikos 15 (AFRILEX-reeks/series 15: 2005): 338-339 ...
Zhang, Xujin; Samuel, Arthur G.
The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407
This study explored the effect of two enhancement techniques on L2 learners' look-up behaviour during a reading task and word retention afterwards amongst Flemish learners of German: a Vocabulary Test Announcement and Task-induced Word Relevance. Eighty-four participants were recruited for this study. They were randomly assigned to one of two groups: (1) not forewarned of an upcoming vocabulary test (incidental condition) or (2) forewarned of a vocabulary test (intentional condition). Task-induced Word Relevance was operationalized by a reading comprehension task. The relevance factor comprised two levels: plus-relevant and minus-relevant target words. Plus-relevant words needed to be looked up and used receptively in order to answer the comprehension questions. In other words, the reading comprehension task could not be accomplished without knowing the meaning of the plus-relevant words. The minus-relevant target words, on the other hand, were not linked to the reading comprehension questions. Our findings show a significant effect of Test Announcement and Word Relevance on whether a target word is looked up. In addition, Word Relevance also affects the frequency of clicks on target words. Word retention is only influenced by Task-induced Word Relevance. The effect of Word Relevance is durable.
González-Alvarez, Julio; Palomar-García, María-Angeles
Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision task on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates.
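The 2 × 2 within-subject manipulation described above can be illustrated schematically. The latencies below are placeholder numbers invented only to show the reported direction of the two effects (faster responses for high lexical frequency, slower for high first-syllable frequency); they are not the study's data, and the cell-means summary shown is far cruder than the linear mixed models actually used.

```python
from statistics import mean

# Placeholder latencies (ms) for the 2 x 2 design:
# lexical frequency (high/low) x first-syllable frequency (high/low).
latencies = {
    ("high_lex", "high_syl"): [620, 640, 610],
    ("high_lex", "low_syl"):  [580, 600, 590],
    ("low_lex",  "high_syl"): [700, 720, 690],
    ("low_lex",  "low_syl"):  [660, 670, 650],
}

cell_means = {cond: mean(v) for cond, v in latencies.items()}

# Facilitatory lexical frequency: low-frequency words are slower.
lex_effect = (mean(cell_means[("low_lex", s)] for s in ("high_syl", "low_syl"))
              - mean(cell_means[("high_lex", s)] for s in ("high_syl", "low_syl")))

# Inhibitory first-syllable frequency: frequent first syllables are slower.
syl_effect = (mean(cell_means[(l, "high_syl")] for l in ("high_lex", "low_lex"))
              - mean(cell_means[(l, "low_syl")] for l in ("high_lex", "low_lex")))
```

Both contrasts come out positive on these placeholder values, matching the sign pattern reported in the abstract; in the actual analysis the same contrasts would be estimated as fixed effects with subject and item random effects.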
Clara Herlina Karjo
Stress placement in English words is governed by highly complicated rules. Thus, assigning stress correctly in English words has been a challenging task for L2 learners, especially Indonesian learners, since their L1 does not recognize such a stress system. This study explores the production of English word stress by 30 university students. The method used for this study is an immediate repetition task. Participants were instructed to identify the stress placement of 80 English words which were auditorily presented as stimuli and to immediately repeat the words with correct stress placement. The objectives of this study are to find out whether English word stress placement is problematic for L2 learners and to investigate the phonological factors which account for these problems. Research reveals that L2 learners differ in their ability to produce stress, but three-syllable words are more problematic than two-syllable words. Moreover, misplacement of stress is caused by, among others, the influence of vowel length and vowel height.
Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners' explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker's voice (talker condition) and background noise (noise condition) using a delayed recognition memory paradigm. The speech and noise signals were spectrally separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.
Hamada, Megumi; Goya, Hideki
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.
Chan, Ricky K. W.; Leung, Janny H. C.
This article reports an experiment on the implicit learning of second language stress regularities, and presents a methodological innovation on awareness measurement. After practising two-syllable Spanish words, native Cantonese speakers with English as a second language (L2) completed a judgement task. Critical items differed only in placement of…
Wegener, Signy; Wang, Hua-Chen; de Lissa, Peter; Robidoux, Serje; Nation, Kate; Castles, Anne
There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., 'nesh', 'coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E.
Pitt, Mark A.
One account of how pronunciation variants of spoken words (center-> "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix
To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing.
Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…
Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua
Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable mismatched words elicited an earlier and stronger N400 than the three partial mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. We interpret the differences in spoken word…
Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
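The feedback manipulation at issue can be caricatured with a toy two-layer interactive activation network, far simpler than TRACE; every parameter value below is an invented illustration, not one used in the reported simulations. Phonemes excite the words containing them, words inhibit each other, and, when feedback is on, words excite their own phonemes, which helps when part of the input (here the final /t/) is degraded by noise.

```python
# Toy interactive activation: phoneme layer -> word layer, lateral
# inhibition between words, optional top-down word -> phoneme feedback.
WORDS = {"cat": ["k", "ae", "t"], "cap": ["k", "ae", "p"]}
PHONEMES = ["k", "ae", "t", "p"]

def simulate(inp, steps=20, feedback=0.3):
    phon = {p: 0.0 for p in PHONEMES}
    word = {w: 0.0 for w in WORDS}
    for _ in range(steps):
        for p in PHONEMES:                   # bottom-up input evidence
            phon[p] += 0.2 * inp.get(p, 0.0)
        for w, ps in WORDS.items():          # phoneme -> word excitation
            word[w] += 0.1 * sum(phon[p] for p in ps)
        total = sum(word.values())
        for w in WORDS:                      # lateral word inhibition
            word[w] -= 0.05 * (total - word[w])
        if feedback:
            for w, ps in WORDS.items():      # word -> phoneme feedback
                for p in ps:
                    phon[p] += feedback * 0.05 * word[w]
    return word

# Degraded input: weak evidence for the final /t/, as if masked by noise.
noisy = {"k": 1.0, "ae": 1.0, "t": 0.3, "p": 0.1}
with_fb = simulate(noisy, feedback=0.3)
no_fb = simulate(noisy, feedback=0.0)
```

In this caricature the correct word ("cat") ends up more active with feedback than without, echoing the abstract's point that feedback matters most for noisy inputs; none of this should be read as a faithful reimplementation of the reported simulations.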
A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous … (e.g., lobe) faster than words with consistent rhymes where the vowel has a less typical spelling (e.g., loaf). The present study extends the previous literature by showing that auditory word recognition is affected by orthographic regularities at different grain sizes, just like written word recognition … and spelling. The theoretical and methodological implications for future research in spoken word recognition are discussed.
L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences. The Arabic group showed higher accuracy in the final position than in the middle, while the Chinese group showed the opposite pattern, and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.
Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob
This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…
This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.
Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L
Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1 , we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee
Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…
The revised hierarchical model seems to differ from the distributed conceptual feature model in predicting how unbalanced bilinguals would be aware of semantic relations between words for taxonomic categories at the basic level (exemplar words) and words for those at the superordinate level (category names) in L2. We conducted a series of four experiments to compare unbalanced bilinguals' awareness of conceptual relations between exemplar words, and between exemplar words and category names, in their first (L1) and second (L2) languages. A priming task of semantic categorization was adopted, and the participants were 72 college students who began to learn L2 in classroom settings at a late age and achieved an L2 proficiency between intermediate and advanced levels. The reaction times indicated that the participants could automatically process not only the exemplar-word but also the category-name primes in L2. Activations of semantic representations for the category names in L2 seemed to spread to those for the exemplar words in L1 and L2, but activations of semantic representations for the exemplar words in L2 spread only to those for the exemplar words in L1. It was concluded that unbalanced bilinguals appear to have developed asymmetric associations between category names and exemplar words in L2. The implication is that L2 learners should learn L2 words mainly by using the language rather than by rote memorization of isolated words.
Revill, Kathleen Pirog; Spieler, Daniel H.
When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place a greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults’ eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degradation of auditory quality in younger listeners does not reproduce this result. These data are most consistent with an increased role for lexical frequency with age. PMID:21707175
Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan
We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word, in Chinese.
Matthews, Joshua; O'Toole, John Mitchell
The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…
Ke, Sihui Echo; Koda, Keiko
This study examined the contributions of morphological awareness (MA) to second language (L2) word meaning inferencing in English-speaking adult learners of Chinese (N = 50). Three research questions were posed: Are L2 learners sensitive to the morphological structure of unknown multi-character words? Does first language (L1) MA contribute to L2…
Brouwer, S.; Mitterer, H.; Huettig, F.
In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" ...
The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.
Nava, Andrea; Pedrazzini, Luciana
We describe an exploratory study carried out within the University of Milan, Department of English the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…
Brouwer, Susanne; Bradlow, Ann R.
This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…
Piai, V.; Roelofs, A.P.A.; Rommers, J.; Maris, E.G.G.
Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in…
Yip, Michael C. W.
Previous experimental psycholinguistic studies suggested that probabilistic phonotactic information may cue the locations of word boundaries in continuous speech, offering a potential solution to the empirical question of how we recognize/segment individual spoken words in speech. We investigated this issue by using…
Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou
This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…
Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)
Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It gives information not only about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British Na…
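Register-split frequency information of the kind the volume tabulates reduces to counting tokens per register. A minimal sketch follows; the toy corpus and its genre labels are invented for illustration, and a real count would be normalized to words per million over a large balanced corpus.

```python
from collections import Counter
import re

# Minimal sketch of register-split frequency counting.
# The toy corpus is invented for illustration; a real study would use
# a large balanced corpus.
corpus = [
    ("spoken",  "well I mean it was sort of nice you know"),
    ("spoken",  "you know I mean well yeah"),
    ("written", "the committee considered the proposal in detail"),
    ("written", "the report was published the following year"),
]

def frequencies(corpus):
    """Return a Counter of token frequencies per register label."""
    freq = {}
    for register, text in corpus:
        words = re.findall(r"[a-z']+", text.lower())
        freq.setdefault(register, Counter()).update(words)
    return freq

freq = frequencies(corpus)
# On a real corpus, words-per-million would be:
#   freq[register][word] / total_tokens[register] * 1e6
```

The per-register `Counter` makes the spoken/written contrast directly comparable word by word, since a `Counter` returns zero for words absent from one register.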
Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M
Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Ranbom, Larissa J.; Connine, Cynthia M.
Four experiments are reported that investigate processing of mispronounced words for which the phonological form is inconsistent with the graphemic form (words spelled with silent letters). Words produced as mispronunciations that are consistent with their spelling were more confusable with their citation form counterpart than mispronunciations…
Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara
Converging evidence suggests that understanding our first language (L1) results in reactivation of experiential sensorimotor traces in the brain. Surprisingly, little is known regarding the involvement of these processes during second-language (L2) processing. Participants saw L1 or L2 words referring to entities with a typical location (e.g., star, mole) (Experiments 1 & 2) or to an emotion (e.g., happy, sad) (Experiment 3). Participants responded to the words' ink color with an upward or downward arm movement. Despite word meaning being fully task-irrelevant, L2 automatically activated motor responses similar to L1, even when L2 was acquired rather late in life (age >11). Specifically, words such as star facilitated upward responses, and words such as root facilitated downward responses. Additionally, words referring to positive emotions facilitated upward responses, and words referring to negative emotions facilitated downward responses. In summary, our study suggests that reactivation of experiential traces is not limited to L1 processing. Copyright © 2014 Elsevier Inc. All rights reserved.
An implicit word learning paradigm was designed to test the hypothesis that children who come to the task of L2 vocabulary acquisition with poorer L1 phonological awareness (PA) are less capable of extracting phonological patterns from L2 and thus have difficulty capitalizing on this knowledge to support L2 vocabulary learning. A group of Chinese-speaking sixth-grade students took a multi-trial L2 (English) word learning task after being exposed to a set of familiar words that rhymed with the target words. Children's PA was measured at grade 3. Children with relatively poorer L1 PA and those with better L1 PA did not differ in identifying the forms of the new words. However, children with poorer L1 PA showed poorer performance in naming pictures whose labels rhymed with the pre-exposure words than pictures whose labels did not. Children with better L1 PA were not affected by the recurring rime shared by the pre-exposure words and the target words. These findings suggest that poor L1 PA may impede L2 word learning via difficulty in abstracting phonological patterns away from L2 input to scaffold word learning.
Acoustic and perceptual studies investigate B2-level Polish learners' acquisition of second language (L2) English word boundaries involving word-initial vowels. In production, participants were less likely to produce glottalization of phrase-medial initial vowels in L2 English than in first language (L1) Polish. Perception studies employing word…
Research with native speakers indicates that, during word recognition, regularly inflected words undergo parsing that segments them into stems and affixes. In contrast, studies with learners suggest that this parsing may not take place in L2. This study's research questions are: Do L2 Spanish learners store and process regularly inflected,…
Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy
Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.
Otake, T.; McQueen, J.M.; Cutler, A.
Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors,
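The embedding-and-competitor manipulation can be sketched as simple lexicon lookups; the romanized toy lexicon below is invented for illustration and is not the study's materials.

```python
# Sketch: find lexicon words embedded at the end of a nonsense carrier,
# and count the lexical competitors activated by the stretch straddling
# the embedded word's onset. Toy romanized lexicon invented for illustration.

JP_LEXICON = {"kaba", "chika", "chikatetsu", "gyaku", "tetsu"}

def final_embeddings(sequence):
    """All lexicon words that end the carrier sequence."""
    return [w for w in JP_LEXICON if sequence.endswith(w)]

def straddling_competitors(sequence, word):
    """Lexicon words that start inside the preceding context and extend
    past the embedded word's onset, competing for that stretch of input."""
    onset = len(sequence) - len(word)
    hits = []
    for start in range(onset):
        for w in JP_LEXICON:
            if sequence.startswith(w, start) and start + len(w) > onset:
                hits.append(w)
    return hits
```

With the abstract's example, `final_embeddings("gyachikaba")` finds the embedded word kaba, and `straddling_competitors("gyachikaba", "kaba")` finds chika as a competitor spanning the word's onset, which is the boundary-ambiguity the study manipulated.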
Burton, Harold; Sinclair, Robert J; Agato, Alvin
We examined cortical activity in the early blind during word recognition memory. Nine participants were blind from birth and one from 1.5 years of age. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was Braille or spoken. Responses were larger for identified "new" words read in Braille in bilateral lower- and higher-tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted participants noted larger responses for "new" words studied in association with pictures, which created a distinctiveness-heuristic source factor that enhanced recollection during remembering. Prior behavioral studies in the early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex lies in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.
Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…
Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan
In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition, by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and their eye movements are recorded. In Experiment 1, phonological information was manipulated at full phonological overlap; in Experiment 2, phonological information was manipulated at partial phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full and partial phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, suggesting that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
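Competitor effects in such paradigms are read off fixation-proportion curves, which amount to binning gaze samples by time and interest area. A minimal sketch follows; the sample records, field names, and 100 ms bin size are invented for illustration.

```python
from collections import defaultdict

# Sketch: proportion of fixations to each interest area per time bin,
# as in visual-world / printed-word analyses. The sample records and
# the 100 ms bin size are invented for illustration.
samples = [
    # (time_ms, interest_area)
    (50, "target"), (60, "distractor"), (150, "competitor"),
    (160, "target"), (170, "target"), (250, "target"),
]

def fixation_proportions(samples, bin_ms=100):
    """Map each time bin to the proportion of samples on each interest area."""
    counts = defaultdict(lambda: defaultdict(int))
    for t, area in samples:
        counts[t // bin_ms * bin_ms][area] += 1
    return {
        t: {area: n / sum(areas.values()) for area, n in areas.items()}
        for t, areas in counts.items()
    }

props = fixation_proportions(samples)
```

Comparing the competitor curve against the distractor curve bin by bin is what licenses statements such as "phonological competitors attracted more fixations than distractors" over a given time window.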
Strand, Julia F; Sommers, Mitchell S
Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
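The phi-square measure mentioned here can be computed from a stimulus-response confusion matrix as a chi-square between two phonemes' response distributions divided by the total count. The sketch below works under that assumption, with invented toy counts; it is not the authors' data or their exact formulation.

```python
# Sketch of phi-square phoneme confusability from a confusion matrix
# (rows: stimulus phonemes, columns: response counts). The toy counts
# are invented for illustration.

def phi_square(row_a, row_b):
    """Chi-square between two response-count distributions, divided by N."""
    n = sum(row_a) + sum(row_b)
    chi2 = 0.0
    for a, b in zip(row_a, row_b):
        col = a + b
        if col == 0:
            continue  # no responses in this column for either phoneme
        ea = col * sum(row_a) / n  # expected count under independence
        eb = col * sum(row_b) / n
        chi2 += (a - ea) ** 2 / ea + (b - eb) ** 2 / eb
    return chi2 / n

# responses:        /p/  /b/  /t/
confusions = {
    "p": [80, 15, 5],
    "b": [20, 70, 10],
    "t": [5, 5, 90],
}
# A low phi-square means similar response patterns, i.e. the two
# phonemes are highly confusable and contribute to lexical competition.
```

On these toy counts, /p/ and /b/ (which listeners confuse often) yield a lower phi-square than /p/ and /t/, matching the intended reading: smaller values mark perceptually closer segments.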
Shtyrov, Yury; Kimppa, Lilli; Pulvermüller, Friedemann
How are words represented in the human brain, and can these representations be qualitatively assessed with respect to their structure and properties? Recent research demonstrates that neurophysiological signatures of individual words can be measured when subjects do not focus their attention … in passive non-attend conditions, with acoustically matched high- and low-frequency words along with pseudo-words. Using factorial and correlation analyses, we found that already at ~120 ms after the spoken stimulus information was available, the amplitude of brain responses was modulated by the words' lexical … for the most frequent word stimuli; later on (~270 ms), a more global lexicality effect with bilateral perisylvian sources was found for all stimuli, suggesting faster access to more frequent lexical entries. Our results support the account of word memory traces as interconnected neuronal circuits, and suggest …
A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker's focusing on a target concept and ending with the initiation of articulation. The initial
Roelofs, A.P.A.; Piai, V.
Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot
This study addressed the development of, and the relationship between, foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through the junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA; that is, children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings make important theoretical and practical contributions to Arabic reading theory in general, and they extend previous work on the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
Zhang, Qingfang; Wang, Cheng
The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect across repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, but only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence on handwritten production is not mandatory and universal, but is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF shows that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for written production models are discussed.
Goh, Winston D; Yap, Melvin J; Lau, Mabel C; Ng, Melvin M R; Tan, Luuan-Chin
A large number of studies have demonstrated that semantic richness dimensions (e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence) influence word recognition processes. Some of these richness effects appear to be task-general, while others have been found to vary across tasks. Importantly, almost all of these findings come from the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects are also found in spoken word recognition, using a megastudy approach that allows for an examination of the relative contribution of the various semantic properties to performance in two tasks: lexical decision and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction (faster responses for spoken words that were concrete, emotionally valenced, and high in number of features), while arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed.
Rohde, Hannah; Ettlinger, Marc
Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the two most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/~/ʃi/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time-course of this interaction and discussing how different models of cue integration could be adapted to account for our results. PMID:22250908
Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan
Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanism induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at early stage of recognition (~150-250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300-500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements.
Charalabopoulou, Frieda; Gavrilidou, Maria; Kokkinakis, Sofie Johansson; Volodina, Elena
Lexical competence constitutes a crucial aspect in L2 learning, since building a rich repository of words is considered indispensable for successful communication. CALL practitioners have experimented with various kinds of computer-mediated glosses to facilitate L2 vocabulary building in the context of incidental vocabulary learning. Intentional…
Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.
This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…
Slote, Joseph; Strand, Julia F
Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
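The reported agreement between platforms rests on correlating per-item scores collected online against those collected in the lab. A minimal sketch of that comparison, with hypothetical per-word accuracies standing in for the actual AMT and laboratory datasets:

```python
import numpy as np

# Hypothetical per-word identification accuracies from the lab and from
# Amazon Mechanical Turk (the study's actual data are not reproduced here).
lab_scores = np.array([0.92, 0.85, 0.78, 0.88, 0.61, 0.73, 0.95, 0.66])
amt_scores = np.array([0.88, 0.80, 0.71, 0.86, 0.55, 0.70, 0.90, 0.62])

# Pearson correlation between the two measures, item by item
r = np.corrcoef(lab_scores, amt_scores)[0, 1]
print(f"lab vs. online correlation: r = {r:.3f}")
```

A high r here would indicate that, despite online participants being faster and less accurate overall, the two settings rank words similarly, which is the pattern the abstract reports.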
Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas
Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
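The theta/alpha dissociation above hinges on estimating power separately in the ~3-7 Hz and ~8-12 Hz bands. The following is a far simpler FFT-based sketch than the study's time-frequency analysis and spatial filtering, run on a simulated trace rather than EEG/MEG data:

```python
import numpy as np

fs = 250                          # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)       # 2 s of simulated signal
rng = np.random.default_rng(0)
# Simulated trace: a 5 Hz (theta) and a weaker 10 Hz (alpha) component plus noise
signal = (np.sin(2 * np.pi * 5 * t)
          + 0.5 * np.sin(2 * np.pi * 10 * t)
          + 0.1 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(signal)) ** 2      # power spectrum
freqs = np.fft.rfftfreq(signal.size, 1 / fs)     # frequency of each bin

def band_power(lo, hi):
    """Mean spectral power in the [lo, hi] Hz band."""
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].mean()

theta = band_power(3, 7)    # theta band, ~3-7 Hz
alpha = band_power(8, 12)   # alpha band, ~8-12 Hz
print(f"theta/alpha power ratio: {theta / alpha:.2f}")
```

Contrasting such band-power estimates across conditions (real words vs. ambiguous vs. clear pseudowords) is the basic move behind the reported alpha suppression and theta enhancement effects.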
Zyzik, Eve; Azevedo, Clara
Although the problem of word class has been explored in numerous first language studies, relatively little is known about this process in SLA. The present study measures second language (L2) learners' knowledge of word class distinctions (e.g., noun vs. adjective) in a variety of syntactic contexts. English-speaking learners of Spanish from…
Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung
This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects of orthographic syllable neighborhood size and sound-to-spelling consistency on the P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with large orthographic syllable neighborhoods than for those with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results support the assumption that orthographic information is used early in the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.
Previous studies have found that quantity of exposure, i.e., frequency of exposure (Horst et al., 1998; Webb, 2008; Pellicer-Sánchez and Schmitt, 2010), is important for second language (L2) contextual word learning. Besides this factor, context constraint and L2 proficiency level have also been found to affect contextual word learning (Pulido, 2003; Tekmen and Daloglu, 2006; Elgort et al., 2015; Ma et al., 2015). In the present study, we adopted the event-related potential (ERP) technique and chose high-constraint sentences as reading materials to further explore the effects of quantity of exposure and proficiency on L2 contextual word learning. Participants were Chinese learners of English with different English proficiency levels. For each novel word, there were four high-constraint sentences with the critical word at the end of the sentence. Learners read sentences and made a semantic relatedness judgment afterwards, with ERPs recorded. Results showed that in the high-constraint condition, where each pseudoword was embedded in four sentences with consistent meaning, the N400 amplitude for the pseudoword decreased significantly as learners read the first two sentences. High-proficiency learners responded faster in the semantic relatedness judgment task. These results suggest that in high-quality sentence contexts, L2 learners can rapidly acquire word meaning without multiple exposures, and that L2 proficiency facilitates this learning process.
Speakers' lip movements are highly informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason multi-modal speech processing has seen limited use. In this study, we developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement via the light reflected from the mouth region. In experiments, we achieved a 66% word recognition rate using lip movement features alone. This result shows that the sensor can serve as a tool for multi-modal speech processing when combined with a microphone mounted on the same headset.
Hirschmüller, Sarah; Egloff, Boris
How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
Kemmerer, David; Tranel, Daniel; Manzel, Ken
We describe a brain-damaged subject, RR, who manifests superior written over spoken naming of concrete entities from a wide range of conceptual domains. His spoken naming difficulties are due primarily to an impairment of lexical-phonological processing, which implies that his successful written naming does not depend on prior access to the sound structures of words. His performance therefore provides further support for the "orthographic autonomy hypothesis," which maintains that written word production is not obligatorily mediated by phonological knowledge. The case of RR is especially interesting, however, because for him the dissociation between impaired spoken naming and relatively preserved written naming is significantly greater for two categories of unique concrete entities that are lexicalised as proper nouns-specifically, famous faces and famous landmarks-than for five categories of nonunique (i.e., basic level) concrete entities that are lexicalised as common nouns-specifically, animals, fruits/vegetables, tools/utensils, musical instruments, and vehicles. Furthermore, RR's predominant error types in the oral modality are different for the two types of stimuli: omissions for unique entities vs. semantic errors for nonunique entities. We consider two alternative explanations for RR's extreme difficulty in producing the spoken forms of proper nouns: (1) a disconnection between the meanings of proper nouns and the corresponding word nodes in the phonological output lexicon; or (2) damage to the word nodes themselves. We argue that RR's combined behavioural and lesion data do not clearly adjudicate between the two explanations, but that they favour the first explanation over the second.
Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, while few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first (L1) and second language (L2). A total of 72 children (10 years old) participated. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than in L1. A high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR such that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
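The +3 and +12 dBA conditions express the ratio of speech power to noise power on a logarithmic scale. A small sketch of how an SNR in dB is computed from waveforms (the signals here are synthetic stand-ins, not the study's stimuli):

```python
import numpy as np

def snr_db(speech, noise):
    """Signal-to-noise ratio in dB from speech and noise waveforms."""
    p_speech = np.mean(np.asarray(speech, float) ** 2)  # mean signal power
    p_noise = np.mean(np.asarray(noise, float) ** 2)    # mean noise power
    return 10 * np.log10(p_speech / p_noise)

rng = np.random.default_rng(1)
speech = rng.standard_normal(8000)        # stand-in for a speech signal
noise = 0.5 * rng.standard_normal(8000)   # noise at half the amplitude
print(f"SNR ≈ {snr_db(speech, noise):.1f} dB")  # amplitude ratio 2 -> ~6 dB
```

Halving the noise amplitude quadruples the power ratio, which is roughly the 6 dB step separating adjacent listening conditions of the kind used in this experiment.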
Janse, Esther; Jesse, Alexandra
Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.
João Mendonça Correia
Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent but acoustically different words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., 'paard'-'horse'). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across the two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations by the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of using MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and these results could be relevant to tracking the neural mechanisms underlying conceptual encoding in …
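Across-language generalization in MVPA means training a classifier on response patterns evoked in one language and testing it on patterns from the other. A toy sketch with simulated patterns and a nearest-centroid classifier (an assumption for illustration; the study's actual classifier, feature selection, and EEG features are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_features = 40, 20  # trials per word, EEG features (assumed sizes)

# Simulated EEG patterns: each of two concepts has its own prototype, and
# the Dutch and English versions of a word share that conceptual prototype.
prototypes = rng.standard_normal((2, n_features))

def trials(concept, shift=0.0):
    """Noisy trials around a concept prototype; `shift` mimics the
    acoustic difference between the two languages."""
    return prototypes[concept] + shift + 0.8 * rng.standard_normal((n_trials, n_features))

train_X = np.vstack([trials(0), trials(1)])            # "Dutch" trials
train_y = np.repeat([0, 1], n_trials)
test_X = np.vstack([trials(0, 0.2), trials(1, 0.2)])   # "English" trials
test_y = np.repeat([0, 1], n_trials)

# Nearest-centroid classifier: fit on one language, test on the other
centroids = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(test_X[:, None] - centroids, axis=2), axis=1)
accuracy = (pred == test_y).mean()
print(f"across-language decoding accuracy: {accuracy:.2f}")
```

Above-chance accuracy under this train-on-one-language, test-on-the-other scheme is what licenses the inference to language-invariant conceptual representations.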
Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.
Jesse, Alexandra; McQueen, James M
Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
Roxbury, Tracy; McMahon, Katie; Copland, David A
Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high-imageability nouns, (b) abstract, low-imageability nouns, and (c) opaque legal pseudowords, presented in a pseudorandomised, event-related design. Activation for the concrete, abstract, and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings on concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than for both the abstract and pseudoword conditions, and the abstract condition was significantly faster than the pseudoword condition. Significant activity was also elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.
Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin
Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.
Piai, Vitória; Roelofs, Ardi; Rommers, Joost; Maris, Eric
Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band. © 2015 Wiley Periodicals, Inc.
It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and their associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete versus abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing.
Huang, Xianjun; Yang, Jin-Chen
The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competition between multiple candidate words and lead to increased processing difficulties, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among candidate words during spoken word recognition.
Wang, Min; Koda, Keiko; Perfetti, Charles A
Different writing systems in the world select different units of spoken language for mapping. Do these writing system differences influence how first language (L1) literacy experiences affect cognitive processes in learning to read a second language (L2)? Two groups of college students who were learning to read English as a second language (ESL) were examined for their relative reliance on phonological and orthographic processing in English word identification: Korean students with an alphabetic L1 literacy background, and Chinese students with a nonalphabetic L1 literacy background. In a semantic category judgment task, Korean ESL learners made more false positive errors in judging stimuli that were homophones to category exemplars than they did in judging spelling controls. However, there were no significant differences in responses to stimuli in these two conditions for Chinese ESL learners. Chinese ESL learners, on the other hand, made more accurate responses to stimuli that were less similar in spelling to category exemplars than to those that were more similar. Chinese ESL learners may rely less on phonological information and more on orthographic information in identifying English words than their Korean counterparts. Further evidence supporting this argument came from a phoneme deletion task in which Chinese subjects performed more poorly overall than their Korean counterparts and made more errors that were phonologically incorrect but orthographically acceptable. We suggest that differences across writing systems and the transfer of L1 reading skills could be responsible for these ESL performance differences.
Whiting, Caroline M.
Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.
Vainio, Seppo; Pajunen, Anneli; Hyona, Jukka
This study investigated the effect of the first language (L1) on the visual word recognition of inflected nouns in second language (L2) Finnish by native Russian and Chinese speakers. Case inflection is common in Russian and in Finnish but nonexistent in Chinese. Several models have been posited to describe L2 morphological processing. The unified…
OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentations of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.
It is an undisputed fact that learning – and remembering – new words is key in successful second language acquisition. And yet researching how vocabulary acquisition takes place is one of the most difficult endeavors in second language acquisition. We can test how many L2 words a learner knows, but…
Strori, Dorina; Zaar, Johannes; Cooke, Martin
…specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound… 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality…
Cremer, Marjolein; Dingshoff, Daphne; de Beer, Meike; Schoonen, Rob
Differences in word associations between monolingual and bilingual speakers of Dutch can reflect differences in how well seemingly familiar words are known. In this (exploratory) study, mono- and bilingual, child and adult free word associations were compared. Responses of children and of monolingual…
Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth
Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children (n = 20) and adults (n = 17) were slower to detect pauses in familiar words with later uniqueness points. Faster latencies were obtained for words with late uniqueness points in constraining compared with neutral sentences; no such effect was observed for early unique words. Following exposure to novel competitors ("biscal"), children (n = 18) and adults (n = 18) showed competition for existing words with early uniqueness points ("biscuit") after 24 hr. Thus, online lexical competition effects are remarkably similar across development. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
Pellikka, Janne; Helenius, Päivi; Mäkelä, Jyrki P; Lehtonen, Minna
How do bilinguals manage the activation levels of the two languages and prevent interference from the irrelevant language? Using magnetoencephalography, we studied the effect of context on the activation levels of languages by manipulating the composition of word lists (the probability of the languages) presented auditorily to late Finnish-English bilinguals. We first determined the upper limit time-window for semantic access, and then focused on the preceding responses during which the actual word recognition processes were assumedly ongoing. Between 300 and 500 ms in the temporal cortices (in the N400 m response) we found an asymmetric language switching effect: the responses to L1 Finnish words were affected by the presentation context unlike the responses to L2 English words. This finding suggests that the stronger language is suppressed in an L2 context, supporting models that allow auditory word recognition to be affected by contextual factors and the language system to be subject to inhibitory influence. Copyright © 2015 Elsevier Inc. All rights reserved.
Siew, Cynthia S. Q.; Vitevitch, Michael S.
Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…
Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland
This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…
Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K
Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.
This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…
Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the discrimination between familiar and unfamiliar consonant-vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. The MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine
Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070
Lewis, Gwyneth; Poeppel, David
Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.
Little is known about the ‘material’ equipment of the early missionaries who set out to evangelize pagans and apostates, since the authors of the sources focused mainly on the successes (or failures) of the missions. Information concerning the ‘infrastructure’ of missions is rather occasional and of a fragmentary nature. The major part in the process of evangelization must have been played by the spoken word, preached directly or through an interpreter, at least in the areas and milieus remote from the centers of ancient civilization. It could not have been otherwise when coming into contact with communities which did not know the art of reading, still less writing. A little more attention is devoted to the other two media, that is, the written word and the images. The significance of the written word was manifold, and – at least as far as the basic liturgical books are concerned (the missal, the evangeliary?) – the manuscripts were indispensable elements of missionaries’ equipment. In certain circumstances the books which the missionaries had at their disposal could acquire special – even magical – significance, the most comprehensible to the Christianized people (the examples given: the evangeliary of St. Winfried-Boniface in the face of death at the hands of a pagan Frisian, and the episode with a manuscript in the story of Anskar’s mission written by Rimbert). The role of plastic art representations (images) during the missions is much less frequently mentioned in the sources. After quoting a few relevant examples (Bede the Venerable, Ermoldus Nigellus, Paul the Deacon, Thietmar of Merseburg), the author also cites an interesting, although not entirely successful, attempt to use drama to instruct the Livonians in the faith while converting them to Christianity, which was reported by Henry of Latvia.
Damian, Markus F; Dorjee, Dusana; Stadthagen-Gonzalez, Hans
Although it is relatively well established that access to orthographic codes in production tasks is possible via an autonomous link between meaning and spelling (e.g., Rapp, Benzing, & Caramazza, 1997), the relative contribution of phonology to orthographic access remains unclear. Two experiments demonstrated persistent repetition priming in spoken and written single-word responses, respectively. Two further experiments showed priming from spoken to written responses and vice versa, which is interpreted as reflecting a role of phonology in constraining orthographic access. A final experiment showed priming from spoken onto written responses even when participants engaged in articulatory suppression during writing. Overall, the results support the view that access to orthography codes is accomplished via both the autonomous link between meaning and spelling and an indirect route via phonology.
The present study examined the effects of multimedia enhancement in video form in addition to textual information on L2 vocabulary instruction for high-level, low-frequency English words among Korean learners of English. Although input-based incidental learning of L2 vocabulary through extensive reading has been conventionally believed to be…
Spoken-word recognition involves multiple activation of alternative word candidates and competition between these alternatives. Phonemic confusions in L2 listening increase the number of potentially active words, thus slowing word recognition by adding competitors. This study used a 70,000-word…
Shuai, Lan; Malins, Jeffrey G
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
Devereux, Barry J; Taylor, Kirsten I; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K
Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co-occurrence (correlational strength)--determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation. Copyright © 2015 The Authors. Cognitive Science published by Cognitive Science Society, Inc.
Simon, E.; Escudero, P.; Broersma, M.; Dziubalska-Kołaczyk, K.; Wrembel, M.; Kul, M.
This study examines the effect of proficiency in the L2 (English) and L3 (Dutch) on word learning in the L3. Learners were 92 L1 Spanish speakers with differing proficiencies in L2 and L3, and 20 native speakers of Dutch. The learners were divided into basic and advanced English and Dutch…
Time-compressed spoken words enhance driving performance in complex visual scenarios : evidence of crossmodal semantic priming effects in basic cognitive experiments and applied driving simulator studies
Would speech warnings be a good option to inform drivers about time-critical traffic situations? Even though spoken words take time until they can be understood, listening is well trained from the earliest age and happens quite automatically. Therefore, it is conceivable that spoken words could immediately preactivate semantically identical (but physically diverse) visual information, and thereby enhance respective processing. Interestingly, this implies a crossmodal semantic effect of audito...
Rogers, Elizabeth A; Fine, Sarah C; Handley, Margaret A; Davis, Hodari B; Kass, James; Schillinger, Dean
To examine the reach, efficacy, and adoption of The Bigger Picture, a type 2 diabetes (T2DM) social marketing campaign that uses spoken-word public service announcements (PSAs) to teach youth about socioenvironmental conditions influencing T2DM risk. A nonexperimental pilot dissemination evaluation through high school assemblies and a Web-based platform was used. The study took place in San Francisco Bay Area high schools during 2013. In the study, 885 students were sampled from 13 high schools. A 1-hour assembly provided data, poet performances, video PSAs, and Web-based platform information. A Web-based platform featured the campaign Web site and social media. Student surveys preassembly and postassembly (knowledge, attitudes), assembly observations, school demographics, counts of Web-based utilization, and adoption were measured. Descriptive statistics, McNemar's χ² test, and mixed modeling accounting for clustering were used to analyze data. The campaign included 23 youth poet-created PSAs. It reached >2400 students (93% self-identified non-white) through school assemblies and has garnered >1,000,000 views of Web-based video PSAs. School participants demonstrated increased short-term knowledge of T2DM as preventable, with risk driven by socioenvironmental factors (34% preassembly identified environmental causes as influencing T2DM risk compared to 83% postassembly), and perceived greater personal salience of T2DM risk reduction (p < .001 for all). The campaign has been adopted by regional public health departments. The Bigger Picture campaign showed its potential for reaching and engaging diverse youth. Campaign messaging is being adopted by stakeholders.
Wang, Lun; Abe, Jun-ichi
This study investigates the relation between phonology and orthography in word recognition in college-level readers with different first languages (L1). It also examines whether word recognition processes in L1 influence those processes in the second language (L2), which was English in the study. Participants were divided into two groups according to their L1 (Japanese, Chinese), and were given semantic category judgment tasks in English in order to compare their degree of reliance on L1 phonology and orthography in L2 word recognition. The results showed that Japanese and Chinese L1 readers differed in using phonological and orthographic information in the L2 English task. The results suggest that reading for meaning in English is affected by prior literacy experiences in reading L1.
Pinhas, Michal; Donohue, Sarah E; Woldorff, Marty G; Brannon, Elizabeth M
Little is known about the neural underpinnings of number word comprehension in young children. Here we investigated the neural processing of these words during the crucial developmental window in which children learn their meanings and asked whether such processing relies on the Approximate Number System. ERPs were recorded as 3- to 5-year-old children heard the words one, two, three, or six while looking at pictures of 1, 2, 3, or 6 objects. The auditory number word was incongruent with the number of visual objects on half the trials and congruent on the other half. Children's number word comprehension predicted their ERP incongruency effects. Specifically, children with the least number word knowledge did not show any ERP incongruency effects, whereas those with intermediate and high number word knowledge showed an enhanced, negative polarity incongruency response (N(inc)) over centroparietal sites from 200 to 500 msec after the number word onset. This negativity was followed by an enhanced, positive polarity incongruency effect (P(inc)) that emerged bilaterally over parietal sites at about 700 msec. Moreover, children with the most number word knowledge showed ratio dependence in the P(inc) (larger for greater compared with smaller numerical mismatches), a hallmark of the Approximate Number System. Importantly, a similar modulation of the P(inc) from 700 to 800 msec was found in children with intermediate number word knowledge. These results provide the first neural correlates of spoken number word comprehension in preschoolers and are consistent with the view that children map number words onto approximate number representations before they fully master the verbal count list.
This study aimed to investigate the effect of unfamiliar stressed prosody on spoken Thai word perception during pre-attentive processing in the brain, as evaluated by the N2a and brain-wave oscillatory activity. EEG recordings were obtained from eleven participants, who were instructed to ignore the sound stimuli while watching silent movies. Results showed that the prosody of unfamiliar stressed words elicited the N2a component, and quantitative EEG analysis found that theta and delta wave power was principally generated in the frontal area. It is possible that the unfamiliar prosody, with its different frequencies, durations and intensities of the sound of Thai words, induced highly selective attention and retrieval of information from episodic memory at the pre-attentive stage of speech perception. This brain electrical activity evidence could be used in further work to develop valuable clinical tests evaluating frontal lobe function in speech perception.
Matthews, Joshua; Cheng, Junyu; O'Toole, John Mitchell
This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary level English as a Second Language (ESL) classes. Classes were either assigned to…
Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.
Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…
With question and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book, with all parts of speech and grammar explained. Used by ELT self-study students.
Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.
Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…
Chan, Kit Ying; Vitevitch, Michael S.
Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…
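The clustering coefficient described here is straightforward to compute directly. Below is a small illustrative Python sketch (not from the study; the phoneme transcriptions and the one-phoneme-edit definition of a neighbor are assumptions) that measures the proportion of a target word's phonological neighbors that are also neighbors of each other, using the cat/bat/hat/can example from the abstract.

```python
# Toy sketch of the local clustering coefficient of a phonological
# neighborhood. Words are phoneme tuples; two words are "neighbors" if
# they differ by a single phoneme substitution, addition, or deletion.
from itertools import combinations

def is_neighbor(w1, w2):
    """True if w1 and w2 differ by exactly one phoneme edit."""
    if w1 == w2:
        return False
    la, lb = len(w1), len(w2)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # one substitution
        return sum(a != b for a, b in zip(w1, w2)) == 1
    # one addition/deletion: the shorter word must equal the longer
    # word with a single phoneme removed
    short, long_ = (w1, w2) if la < lb else (w2, w1)
    return any(short == long_[:i] + long_[i + 1:] for i in range(len(long_)))

def clustering_coefficient(target, lexicon):
    """Proportion of the target's neighbor pairs that are themselves neighbors."""
    neighbors = [w for w in lexicon if is_neighbor(target, w)]
    pairs = list(combinations(neighbors, 2))
    if not pairs:
        return 0.0
    return sum(is_neighbor(a, b) for a, b in pairs) / len(pairs)

# "cat" /k ae t/ with the neighbors bat, hat, can from the abstract:
# only the pair (bat, hat) are neighbors of each other, so C = 1/3.
lexicon = [("b", "ae", "t"), ("h", "ae", "t"), ("k", "ae", "n")]
print(clustering_coefficient(("k", "ae", "t"), lexicon))
```

Of the three neighbor pairs, only bat/hat are themselves neighbors, so the target "cat" gets a clustering coefficient of 1/3; a higher value would indicate a more tightly interconnected neighborhood.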
This study examined the effect of first language (L1) phonological awareness on the rate of learning new second language (L2) color terms and the rate of processing old color terms. Two groups of 37 children participated; they differed on L1 phonological awareness measured at Grade 3. At Grade 5, over multiple trials, the children learned new L2…
Liebenthal, Einat; Silbersweig, David A; Stern, Emily
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.
Vornik, Lana A; Sharman, Stefanie J; Garry, Maryanne
We investigated whether the sociolinguistic information delivered by spoken, accented postevent narratives would influence the misinformation effect. New Zealand subjects listened to misleading postevent information spoken in either a New Zealand (NZ) or North American (NA) accent. Consistent with earlier research, we found that NA accents were seen as more powerful and more socially attractive. We found that accents per se had no influence on the misinformation effect but sociolinguistic factors did: both power and social attractiveness affected subjects' susceptibility to misleading postevent suggestions. When subjects rated the speaker highly on power, social attractiveness did not matter; they were equally misled. However, when subjects rated the speaker low on power, social attractiveness did matter: subjects who rated the speaker high on social attractiveness were more misled than subjects who rated the speaker lower. There were similar effects for confidence. These results have implications for our understanding of social influences on the misinformation effect.
In this study, word knowledge and its relation to text comprehension was examined with 50 Chinese- and 20 Korean-speaking second language (L2) learners and 40 first language (L1) speakers of Japanese. Breadth and depth of word knowledge were assessed by a word-definition matching test and a word-associates selection test, respectively. Text…
Choi, Wonil; Nam, Kichun; Lee, Yoonhyoung
Experiments with Korean learners of English and English monolinguals were conducted to examine whether knowledge of syllabification in the native language (Korean) affects the recognition of printed words in the non-native language (English). Another purpose of this study was to test whether syllables are the processing unit in Korean visual word recognition. In Experiment 1, 26 native Korean speakers and 19 native English speakers participated. In Experiment 2, 40 native Korean speakers participated. In two experiments, syllable length was manipulated based on the Korean syllabification rule and the participants performed a lexical decision task. Analyses of variance were performed for the lexical decision latencies and error rates in two experiments. The results from Korean learners of English showed that two-syllable words based on the Korean syllabification rule were recognized faster as words than various types of three-syllable words, suggesting that Korean learners of English exploited their L1 phonological knowledge in recognizing English words. The results of the current study also support the idea that syllables are a processing unit of Korean visual word recognition.
Sommers, Mitchell S; Barcroft, Joe
Three experiments were conducted to examine the effects of trial-to-trial variations in speaking style, fundamental frequency, and speaking rate on identification of spoken words. In addition, the experiments investigated whether any effects of stimulus variability would be modulated by phonetic confusability (i.e., lexical difficulty). In Experiment 1, trial-to-trial variations in speaking style reduced the overall identification performance compared with conditions containing no speaking-style variability. In addition, the effects of variability were greater for phonetically confusable words than for phonetically distinct words. In Experiment 2, variations in fundamental frequency were found to have no significant effects on spoken word identification and did not interact with lexical difficulty. In Experiment 3, two different methods for varying speaking rate were found to have equivalent negative effects on spoken word recognition and similar interactions with lexical difficulty. Overall, the findings are consistent with a phonetic-relevance hypothesis, in which accommodating sources of acoustic-phonetic variability that affect phonetically relevant properties of speech signals can impair spoken word identification. In contrast, variability in parameters of the speech signal that do not affect phonetically relevant properties are not expected to affect overall identification performance. Implications of these findings for the nature and development of lexical representations are discussed.
Chu, Min-Chin; Chen, Shu-Hui
Empirical evidence shows that explicit phonics teaching is beneficial for English word reading. However, there has been controversy as to whether phonics teaching should incorporate meaning-involved decodable text instruction to facilitate children's word reading. This study compares the effects of phonics teaching with and without decodable text instruction on immediate and delayed English word reading in 117 Taiwanese children learning English, assigned to a phonics-only group (n = 58) and a phonics plus decodable text instruction (Phonics+) group (n = 59). Results showed that although both groups significantly improved in immediate and delayed post-test word reading, the Phonics+ group performed better in both post-tests, but the difference was significant only in delayed word reading, suggesting a better long-term retention effect produced by Phonics+ teaching. These results indicate that incorporating meaning-involved decodable text reading may offer an additional facilitative route for English word reading, even for child learners of English from a non-alphabetic L1 background. The findings were discussed from linguistic, psycholinguistic, and reading perspectives, with implications drawn for second/foreign language teaching and research in reading instruction.
Shen, Helen H.; Jiang, Xin
This study investigated the relationships between lower-level processing and general reading comprehension among adult L2 (second-language) beginning learners of Chinese, in both target and non-target language learning environments. Lower-level processing in Chinese reading includes the factors of character-naming accuracy, character-naming speed,…
Kuijk, D.J. van; Wittenburg, P.; Dijkstra, A.F.J.; Brinker, B.P.L.M. Den; Beek, P.J.; Brand, A.N.; Maarse, F.J.; Mulder, L.J.M.
A new psycholinguistically motivated, neural-network-based model of human word recognition is presented. In contrast to earlier models, it uses real speech as input. At the word layer, acoustical and temporal information is stored by sequences of connected sensory neurons that pass on sensor…
From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., the acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.
Ernestus, Mirjam; Mak, Willem Marinus
This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the…
Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda
Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; age 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
Starting with an 1890 essay by Freud, the author goes in search of an interpersonal psychology native to Freud's psychoanalytic method in psychoanalysis and to the interpersonal method in psychiatry. This derives from the basic interpersonal nature of the human situation in the lives of individuals and social groups. Psychiatry, the healing of the soul, and psychotherapy, therapy of the soul, are examined from the perspective of the communication model, based on the essential interpersonal function of language and the spoken word: persons addressing speech to themselves and to others in relations between family members, others in society, and the professionals who serve them. The communicational model is also applied in examining psychiatric disorders and psychiatric diagnoses, as well as psychodynamic formulations, which leads to a reformulation of psychoanalytic therapy as a process. A plea is entered to define psychoanalysis as an interpersonal discipline, in analogy to Sullivan's interpersonal psychiatry.
Li, Man; DeKeyser, Robert
This study examined the differential effects of systematic perception and production practice and the role of musical ability in learning Mandarin tone-words by native English-speaking adults in a training study. In this study, all participants (N = 38; 19 for each practice group) were first taught declarative knowledge of Mandarin tones and of…
Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M
When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.
Piai, V.; Roelofs, A.P.A.; Jensen, O.; Schoffelen, J.M.; Bonnefond, M.
According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography…
Brunellière, Angèle; Sánchez-García, Carolina; Ikumi, Nara; Soto-Faraco, Salvador
Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and, whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger long-lasting N400, compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension.
According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than with unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than on related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350-650 ms (4-10 Hz) in left superior frontal gyrus was larger on related than on unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g., "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters such as velocity peaks revealed that when participants produced action-related words their movements became faster compared to conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant to the point that language can refine overt motor behaviour.
Blumenfeld, Henrike K.; Marian, Viorica
Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842
Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene
Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.
Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…
It is commonly thought that phonological learning is different in young children compared to adults, possibly because the speech processing system has not yet reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6-8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language than on maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, in contrast to the salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either a native or a foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody.
Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment…
Paquette-Smith, Melissa; Fecher, Natalie; Johnson, Elizabeth K
Sensitivity to noncontrastive subphonemic detail plays an important role in adult speech processing, but little is known about children's use of this information during online word recognition. In two eye-tracking experiments, we investigate 2-year-olds' sensitivity to a specific type of subphonemic detail: coarticulatory mismatch. In Experiment 1, toddlers viewed images of familiar objects (e.g., a boat and a book) while hearing labels containing appropriate or inappropriate coarticulation. Inappropriate coarticulation was created by cross-splicing the coda of the target word onto the onset of another word that shared the same onset and nucleus (e.g., to create boat, the final consonant of boat was cross-spliced onto the initial CV of bone). We tested 24-month-olds and 29-month-olds in this paradigm. Both age groups behaved similarly, readily detecting the inappropriate coarticulation (i.e., showing better recognition of identity-spliced than cross-spliced items). In Experiment 2, we asked how children's sensitivity to subphonemic mismatch compared to their sensitivity to phonemic mismatch. Twenty-nine-month-olds were presented with targets that contained either a phonemic (e.g., the final consonant of boat was spliced onto the initial CV of bait) or a subphonemic mismatch (e.g., the final consonant of boat was spliced onto the initial CV of bone). Here, the subphonemic (coarticulatory) mismatch was not nearly as disruptive to children's word recognition as a phonemic mismatch. Taken together, our findings support the view that 2-year-olds, like adults, use subphonemic information to optimize online word recognition.
Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal colour naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus. Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the anterior cingulate cortex, a region that is likely implementing domain…
Gautreau, Aurore; Hoen, Michel; Meunier, Fanny
This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
Griebel, Ulrike; Oller, D. Kimbrough
Rapid vocabulary learning in children has been attributed to “fast mapping”, with new words often claimed to be learned through a single presentation. As reported in 2004 in Science a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion. PMID:22363421
Rogers, Jack C; Davis, Matthew H
Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.
Gwilliams, L; Marantz, A
Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Ueno, Taiji; Saito, Satoru
Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.
Lew-Williams, Casey; Fernald, Anne
All nouns in Spanish have grammatical gender, with obligatory gender marking on preceding articles (e.g., la and el, the feminine and masculine forms of "the," respectively). Adult native speakers of languages with grammatical gender exploit this cue in on-line sentence interpretation. In a study investigating the early development of this ability, Spanish-learning children (34-42 months) were tested in an eye-tracking procedure. Presented with pairs of pictures with names of either the same grammatical gender (la pelota, "ball [feminine]"; la galleta, "cookie [feminine]") or different grammatical gender (la pelota; el zapato, "shoe [masculine]"), they heard sentences referring to one picture (Encuentra la pelota, "Find the ball"). The children were faster to orient to the referent on different-gender trials, when the article was potentially informative, than on same-gender trials, when it was not, and this ability was correlated with productive measures of lexical and grammatical competence. Spanish-learning children who can speak only 500 words already use gender-marked articles in establishing reference, a processing advantage characteristic of native Spanish-speaking adults.
The study investigated the role of word-level and verbal skills in writing quality of learners who spoke English as a first (L1) and second (L2) language. One hundred and sixty-eight L1 and L2 learners (M = 115.38 months, SD = 3.57 months) participated in the study. All testing was conducted in English. There was a statistically significant L1…
Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie
Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…
Prior, Anat; MacWhinney, Brian; Kroll, Judith F.
We present a set of translation norms for 670 English and 760 Spanish nouns, verbs and class ambiguous items that varied in their lexical properties in both languages, collected from 80 bilingual participants. Half of the words in each language received more than a single translation across participants. Cue word frequency and imageability were both negatively correlated with number of translations. Word class predicted number of translations: Nouns had fewer translations than did verbs, which had fewer translations than class-ambiguous items. The translation probability of specific responses was positively correlated with target word frequency and imageability, and with its form overlap with the cue word. Translation choice was modulated by L2 proficiency: Less proficient bilinguals tended to produce lower probability translations than more proficient bilinguals, but only in forward translation, from L1 to L2. These findings highlight the importance of translation ambiguity as a factor influencing bilingual representation and performance. The norms can also provide an important resource to assist researchers in the selection of experimental materials for studies of bilingual and monolingual language performance. These norms may be downloaded from www.psychonomic.org/archive. PMID:18183923
Plat, Rika; Lowie, Wander; de Bot, Kees
Reaction time data have long been collected in order to gain insight into the underlying mechanisms involved in language processing. Means analyses often attempt to break down what factors relate to what portion of the total reaction time. From a dynamic systems theory perspective or an interaction dominant view of language processing, it is impossible to isolate discrete factors contributing to language processing, since these continually and interactively play a role. Non-linear analyses offer the tools to investigate the underlying process of language use in time, without having to isolate discrete factors. Patterns of variability in reaction time data may disclose the relative contribution of automatic (grapheme-to-phoneme conversion) processing and attention-demanding (semantic) processing. The presence of a fractal structure in the variability of a reaction time series indicates automaticity in the mental structures contributing to a task. A decorrelated pattern of variability will indicate a higher degree of attention-demanding processing. A focus on variability patterns allows us to examine the relative contribution of automatic and attention-demanding processing when a speaker is using the mother tongue (L1) or a second language (L2). A word naming task conducted in the L1 (Dutch) and L2 (English) shows L1 word processing to rely more on automatic spelling-to-sound conversion than L2 word processing. A word naming task with a semantic categorization subtask showed more reliance on attention-demanding semantic processing when using the L2. A comparison to L1 English data shows this was not only due to the amount of language use or language dominance, but also to the difference in orthographic depth between Dutch and English. An important implication of this finding is that when the same task is used to test and compare different languages, one cannot straightforwardly assume the same cognitive sub processes are involved to an equal degree using the same
Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun
The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.
Kerz, E.; Wiechmann, D.; Mitkov, R.
There is an accumulating body of evidence that knowledge of the statistics of multiword phrases (MWP) facilitates native language learning and processing both in children and adults. However, less is known about whether adult second language (L2) learners are able to develop native-like sensitivity
"Decoding"--converting the written symbols (or graphemes) of an alphabetical writing system into the sounds (or phonemes) they represent, using knowledge of the language's symbol/sound correspondences--has been argued to be an important but neglected skill in the teaching of second language (L2) French in English secondary schools.…
We conducted three experiments investigating in more detail the interaction between the two effects of bilingualism and L1-L2 similarity in the speech performance of balanced and unbalanced bilinguals. In Experiment 1, L1 Mandarin monolinguals and two groups of Hakka and Minnan balanced bilinguals (Hakka: more similar to Mandarin) performed a non-contextual single-character reading task in Mandarin, which required more inhibitory control. The two bilingual groups outperformed the monolinguals, regardless of their L1 background. However, the bilingual advantage was not found in a contextual multi-word task (Experiment 2), but instead the effect of cross-linguistic similarity emerged. Furthermore, in Experiment 3, the Hakka unbalanced bilinguals showed an advantage in the non-contextual task, while their Minnan counterparts did not, and the impact of L1-L2 similarity emerged in both tasks. These results unveiled the way the two effects dynamically interplayed depending on the task contexts and the relative degrees of using L1 and L2.
This study compared the reading and oral language skills of children who speak English as a first (L1) and second language (L2), and examined whether the strength of the relationship between word reading, oral language, and reading comprehension was invariant (equivalent) across the two groups. The participants included 183 L1 and L2 children…
Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
Appel, Randy; Wood, David
The correct use of frequently occurring word combinations represents an important part of language proficiency in spoken and written discourse. This study investigates the use of English-language recurrent word combinations in low-level and high-level L2 English academic essays sourced from the Canadian Academic English Language (CAEL) assessment.…
Feghali, Maksoud N.
This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…
Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.
Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We
In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
Alt, Mary; Gutmann, Michelle L
This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.
Nicodemus, Brenda; Emmorey, Karen
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…
Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.
Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semifixed multi-word units (MWUs),
Dictionary studies have suggested that nearly half of the English lexicon has multiple meanings. It is not yet clear, however, if second language learners reading English texts will encounter words with multiple meanings to the same degree. This study investigates the use of words with multiple meanings in an authentic English novel. Two samples…
Mishra, Ramesh Kumar; Singh, Niharika
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
Mendikoetxea, Amaya; Lozano, Cristóbal
This paper shows the need to triangulate different approaches in Bilingualism and Second Language Acquisition (SLA) research to fully understand late bilinguals' interlanguage grammars. Methodologically, we show how experimental and corpus data can be (and should be) triangulated by reporting on a corpus study (Lozano and Mendikoetxea in Biling Lang Cognit 13(4):475-497, 2010) and a new follow-up offline experiment investigating Subject-Verb inversion (Subject-Verb/Verb-Subject order) in L1 Spanish-L2 English (n = 417). Theoretically, we follow a recent line in psycholinguistic approaches to Bilingualism and SLA research (Interface Hypothesis, Sorace in Linguist Approaches Biling 1(1):1-33, 2011). It focuses on the interface between syntax and language-external modules of the mind/brain (syntax-discourse [end-focus principle] and syntax-phonology [end-weight principle]) as well as a language-internal interface (lexicon-syntax [unaccusative hypothesis]). We argue that it is precisely this multi-faceted interface approach (corpus and experimental data, core syntax and the interfaces, representational and processing models) that provides a deeper understanding of (i) the factors that favour inversion in L2 acquisition in particular and (ii) interlanguage grammars in general.
Gow, David W; Olson, Bruna B
Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
Parisse , Christophe; Le Normand , Marie-Thérèse
The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automa...
Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation
Lynne E Bernstein
Training with audiovisual (AV) speech can promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. Pre-/perilingually deafened adults rely on visual speech even when they also use a cochlear implant. This study investigated whether visual speech promotes auditory perceptual learning in these cochlear implant users. In Experiment 1, 28 prelingually deafened adults with late-acquired cochlear implants were assigned to learn paired associations between spoken disyllabic CVCVC (C = consonant, V = vowel) nonsense words and nonsense pictures (fribbles), under AV and then auditory-only (AO) training conditions (or counterbalanced AO then AV). After training on each list of paired associates (PA), testing was carried out AO. Across AV and AO training, AO PA test scores improved, as did identification of consonants in untrained CVCVC stimuli. However, whenever PA training was carried out with AV stimuli, AO test scores were steeply reduced. Experiment 2 repeated the experiment with 43 normal-hearing adults. Their AO test scores did not drop following AV PA training and even increased relative to scores following AO training. Normal-hearing participants' consonant identification scores also improved, but with a pattern that contrasted with that of the cochlear implant users: normal-hearing adults were most accurate for medial consonants, whereas cochlear implant users were most accurate for initial consonants. The results are interpreted within a multisensory reverse hierarchy theory, which predicts that perceptual tasks are carried out whenever possible based on immediate high-level perception without scrutiny of lower-level features. The theory implies that, based on their bias towards visual speech, cochlear implant participants learned the PAs with greater reliance on vision to the detriment of auditory perceptual learning. Normal-hearing participants' learning took advantage of the concurrence between auditory and visual
Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides
extent of the emphasis on the acquisition of vocabulary in school curricula. After a brief introduction, the author looks in chapter 2 at major books which in the 20th century worked on a controlled vocabulary for foreign-language learners in Europe, Asia and America. This section provides the background for the elaboration of ...
L2 production of English word-final consonants: the role of orthography and learner profile variables (Produção de consoantes finais do inglês como L2: o papel da ortografia e de variáveis relacionadas ao perfil do aprendiz)
The present study investigates some factors affecting the acquisition of second language (L2) phonology by learners with considerable exposure to the target language in an L2 context. More specifically, the purpose of the study is two-fold: (a) to investigate the extent to which participants resort to phonological processes resulting from the transfer of L1 sound-spelling correspondence into the L2 when pronouncing English word-final consonants; and (b) to examine the relationship between rate of transfer and learner profile factors, namely proficiency level, length of residence in the L2 country, age of arrival in the L2 country, education, chronological age, use of English with native speakers, attendance in EFL courses, and formal education. The investigation involved 31 Brazilian speakers living in the United States with diverse profiles. Data were collected using a questionnaire to elicit the participants' profiles, a sentence-reading test (pronunciation measure), and an oral picture-description test (L2 proficiency measure). The results indicate that even in an L2 context, the transfer of L1 sound-spelling correspondence to the production of L2 word-final consonants is frequent. The findings also reveal that extensive exposure to rich L2 input leads to the development of proficiency and improves production of L2 word-final consonants.
Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…
Lipski, John M.
The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)
Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…
DeWitt, Iain D. J.
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
Language competence in various communicative activities in L2 largely depends on the learners' size of vocabulary. The target vocabulary of adult L2 learners should be between 2,000 high frequency words (a critical threshold) and 10,000 word families (for comprehension of university texts). For a TOEIC test, the threshold is estimated to be…
Norman, Tal; Degani, Tamar; Peleg, Orna
The present study examined visual word recognition processes in Hebrew (a Semitic language) among beginning learners whose first language (L1) was either Semitic (Arabic) or Indo-European (e.g. English). To examine if learners, like native Hebrew speakers, exhibit morphological sensitivity to root and word-pattern morphemes, learners made an…
Vol. 68, No. 2 (2017), pp. 229-237, ISSN 0021-5597. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: spoken language, spoken corpus, tag question, response word. Subject RIV: AI - Linguistics. OECD field: Linguistics. http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf
Yoneyama, Kiyoko; Munson, Benjamin
This experiment examined whether the influence of listeners' language proficiency on L2 speech recognition is affected by the structure of the lexicon; specifically, it examined the effect of word frequency (WF) and phonological neighborhood density (PND) on word recognition in native speakers of English and second-language (L2) speakers of English whose first language was Japanese. The stimuli included English words produced by a native speaker of English and English words produced by a native speaker of Japanese (i.e., with Japanese-accented English). The experiment was inspired by the finding of Imai, Flege, and Walley [(2005). J. Acoust. Soc. Am. 117, 896-907] that the influence of talker accent on speech intelligibility for L2 learners of English whose L1 is Spanish varies as a function of words' PND. In the current study, significant interactions between stimulus accentedness and listener group on the accuracy and speed of spoken word recognition were found, as were significant effects of PND and WF on word-recognition accuracy. However, no significant three-way interaction among stimulus talker, listener group, and PND on either measure was found. Results are discussed in light of recent findings on cross-linguistic differences in the nature of the effects of PND on L2 phonological and lexical processing.
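Phonological neighborhood density (PND), as used in studies of this kind, is conventionally the number of lexical entries that differ from a word by exactly one phoneme substitution, insertion, or deletion. A minimal sketch of that count, over a made-up toy lexicon of phoneme tuples (illustrative only, not the study's materials):

```python
def one_phoneme_apart(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, insertion, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # same length: exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    # lengths differ by one: deleting one phoneme from the longer
    # sequence must yield the shorter one
    short, long_ = (a, b) if la < lb else (b, a)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood_density(word, lexicon):
    """Count of lexicon entries one phoneme away from `word`."""
    return sum(one_phoneme_apart(word, w) for w in lexicon if w != word)

# Toy lexicon of phoneme tuples (hypothetical transcriptions):
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ah", "t"),
           ("k", "ae", "b"), ("k", "ae", "t", "s"), ("d", "ao", "g")]
print(neighborhood_density(("k", "ae", "t"), lexicon))  # prints 4
```

Here "cat" has four neighbors (bat, cut, cab, cats) and the unrelated "dog" is excluded, matching the standard one-edit definition.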
The article reports on the findings of an empirical study of the use of repeats – as one of the markers of disfluency – in advanced learner English and contributes to the study of L2 fluency. An analysis of 13 hours of recordings of interviews with 50 advanced learners of English with Czech as L1 revealed 1,905 instances of repeats, which mainly (78%) consisted of one-word repeats occurring at the beginning of clauses and constituents. Two-word repeats were less frequent (19%) but appeared in the same positions within the utterances. Longer repeats are much rarer (<2.5%). A comparison with available analyses shows that Czech advanced learners of English use repeats in a similar way as advanced learners of English with a different L1 and also as native speakers. If repeats are accepted as fluencemes, i.e. components contributing to fluency, it would appear that many advanced learners successfully adopt this native-like strategy, either as a result of exposure to native speech or as transfer from their L1s. Whilst a question remains whether such fluency-enhancing strategies ought to become part of L2 instruction, it is argued that spoken learner corpora also ought to include samples of the learners' L1 production.
We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...
This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters, whether or not they are morphemic. A mono-morphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine if this purely orthographic difference affects auditory lexical access by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition task and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.
Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.
Spoken Language Technologies for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016), 128-135. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil... Abstract: We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching... Our research focuses on pronunciation modeling of English (embedded language) words within
Prior, Anat; Goldina, Anna; Shany, Michal; Geva, Esther; Katzir, Tami
The current study examined the predictive roles of L2 vocabulary knowledge and L2 word reading skills in explaining individual differences in lexical inferencing in the L2. Participants were 53 Israeli high school students who emigrated from the former Soviet Union, and spoke Russian as an L1 and Hebrew as an L2. L2 vocabulary knowledge and…
Siyanova, Anna; Schmitt, Norbert
One of the choices available in English is between one-word verbs (train at the gym) and their multi-word counterparts (work out at the gym). Multi-word verbs tend to be colloquial in tone and are a particular feature of informal spoken discourse. Previous research suggests that English learners often have problems with multi-word verbs, and may…
Swerts, Marc; Zerbian, Sabine
Previous studies have shown that characteristics of a person's first language (L1) may transfer to a second language (L2). The current study looks at the extent to which this holds for aspects of intonation as well. More specifically, we investigate to what extent traces of the L1 can be discerned in the way intonation is used in the L2 for two functions: (1) to highlight certain words by making them sound more prominent and (2) to signal continuation or finality in a list by manipulating the speech melody. To this end, the article presents an explorative study into the way focus and boundaries are marked prosodically in Zulu, and it also compares such prosodic functions in two variants of English in South Africa, i.e., English spoken as an L1, and English spoken as an L2/additional language by speakers who have Zulu as their L1. The latter language is commonly referred to as Black South African English. This comparison is interesting from a typological perspective, as Zulu is intonationally different from English, especially in the way prosody is exploited for signalling informationally important stretches of speech. Using a specific elicitation procedure, we found in a first study that speakers of South African English (as L1) mark focused words and position within a list by intonational means, just as in other L1 varieties of English, whereas Zulu only uses intonation for marking continuity or finality. A second study focused on speakers of Black South African English, and compared the prosody of proficient versus less proficient speakers. We found that the proficient speakers were perceptually equivalent to L1 speakers of English in their use of intonation for marking focus and boundaries. The less proficient speakers marked boundaries in a similar way as L1 speakers of English, but did not use prosody for signalling focus, analogous to what is typical of their native language. Acoustic observations match these perceptual results. Copyright © 2010 S. Karger AG
Harrington, Mike; Sawyer, Mark
Examines the sensitivity of second-language (L2) working memory (ability to store and process information simultaneously) to differences in reading skills among advanced L2 learners. Subjects with larger L2 working memory capacities scored higher on measures of L2 reading skills, but no correlation was found between reading and passive short-term…
Discussion: Although spoken Persian has no strict word order, Persian-speaking children use the other logically possible orders of subject (S), verb (V), and object (O) less often than the SOV structure.
Cheng, Junyu; Matthews, Joshua
This study explores the constructs that underpin three different measures of vocabulary knowledge and investigates the degree to which these three measures correlate with, and are able to predict, measures of second language (L2) listening and reading. Word frequency structured vocabulary tests tapping "receptive/orthographic (RecOrth)…
Chetail, Fabienne; Content, Alain
Syllabification of spoken words has been largely used to define syllabic properties of written words, such as the number of syllables or syllabic boundaries. By contrast, some authors proposed that the functional structure of written words stems from visuo-orthographic features rather than from the transposition of phonological structure into the…
The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...
Hayes-Harb, Rachel; Masuda, Kyoko
There is much interest among psychologists and linguists in the influence of the native language sound system on the acquisition of second languages (Best, 1995; Flege, 1995). Most studies of second language (L2) speech focus on how learners perceive and produce L2 sounds, but we know of only two that have considered how novel sound contrasts are encoded in learners' lexical representations of L2 words (Pallier et al., 2001; Ota et al., 2002). In this study we investigated how native speakers of English encode Japanese consonant quantity contrasts in their developing Japanese lexicons at different stages of acquisition (Japanese contrasts singleton versus geminate consonants but English does not). Monolingual English speakers, native English speakers learning Japanese for one year, and native speakers of Japanese were taught a set of Japanese nonwords containing singleton and geminate consonants. Subjects then performed memory tasks eliciting perception and production data to determine whether they encoded the Japanese consonant quantity contrast lexically. Overall accuracy in these tasks was a function of Japanese language experience, and acoustic analysis of the production data revealed non-native-like patterns of differentiation of singleton and geminate consonants among the L2 learners of Japanese. Implications for theories of L2 speech are discussed.
I Nengah Sudipa
This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students encountered is interference, the influence of their own language system, especially in word order.
In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...
Thais Cristófaro Silva
This article discusses the appropriation of rhotics in English-L2 spoken by Brazilian speakers. The results show that rhotics are quickly incorporated into the English-L2 speakers' production. A detailed examination of the data indicates that the following factors are relevant in the appropriation of the retroflex approximant in English-L2: proficiency, the individual (learner) and the lexical item. The fact that the appropriation of the retroflex approximant quickly achieves excellent levels in English-L2 spoken by Brazilian speakers suggests that the teaching of pronunciation is specific and not global. Based on Multirepresentational Models, it is argued that grammatical knowledge is a dynamic construct, interlaced by various linguistic and non-linguistic factors.
This article discusses the synonyms of the word "mistake". The discussion also covers the meaning of 'word' itself. Words can be considered as forms, whether spoken or written, or alternatively as composite expressions, which combine form and meaning. Synonyms are different phonological words which have the same or very similar meanings. The synonyms of mistake are error, fault, blunder, slip, slipup, gaffe and inaccuracy. The data is taken from a computer program. The procedure of data collection is...
The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.
Cornillie, Frederik; Baten, Kristof; De Hertog, Dirk
This paper reports on the potential of Oral Elicited Imitation (OEI) as a format for output practice, building on an analysis of picture-matching and spoken data collected from 36 university-level learners of German as a second language (L2) in a web-based assessment task inspired by Input Processing (VanPatten, 2004). The design and development…
Chen, Wei; Mostow, Jack; Aist, Gregory
Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…
Vol. 68, No. 2 (2017), pp. 305-315, ISSN 0021-5597. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: correlative conjunctions, spoken Czech, cohesion. Subject RIV: AI - Linguistics. OECD field: Linguistics. http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf
Bonin, P; Fayol, M; Chalard, M
This study investigates age of acquisition (AoA) and word frequency effects in both spoken and written picture naming. In the first two experiments, reliable AoA effects on object naming speed, with objective word frequency controlled for, were found in both spoken (Experiment 1) and written picture naming (Experiment 2). In contrast, no reliable objective word frequency effects were observed on naming speed, with AoA controlled for, in either spoken (Experiment 3) or written (Experiment 4) picture naming. The implications of the findings for written picture naming are briefly discussed.
Eskildsen, Søren W.; Wagner, Johannes
This study uses conversation analysis (CA) to investigate the coupling of specific linguistic items with specific gestures in second language (L2) learning over time. In particular, we are interested in how gestures accompany learning of new vocabulary. CA-informed studies of gesture have previously shown the importance of embodiment in L2 use and…
Parisse, C; Le Normand, M T
The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.
The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase as one of the most crucial aspects of grammar of any spoken language. It aims to investigate the order of the primary constituents which can either be subject, object, or verb of a simple
This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…
Lestari, Dessi Puji; Furui, Sadaoki
Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
This paper addresses the problem of spoken document retrieval under noisy conditions by incorporating sound selection of a basic unit and an output form of a speech recognition system. The syllable fragment is combined with a confusion network in a spoken document retrieval task. After selecting an appropriate syllable fragment, a lattice is converted into a confusion network that is able to minimize the word error rate instead of maximizing the whole-sentence recognition rate. A vector space model is adopted in the retrieval task, where tf-idf weights are derived from the posterior probability. The confusion network with syllable fragments is able to improve the mean average precision (MAP) score by 0.342 and 0.066 over the one-best scheme and the lattice, respectively.
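The retrieval scheme described above can be sketched as follows, assuming (as the abstract states) that the term frequency of a syllable fragment is derived from posterior probability, i.e. it is an expected count summed over the fragment's occurrences in the confusion network rather than an integer count from a one-best transcript. The fragment labels and counts below are invented toy values, not data from the paper:

```python
import math

def idf_weights(docs):
    """Inverse document frequency from a {doc_id: {term: count}} collection."""
    n = len(docs)
    df = {}
    for terms in docs.values():
        for t in terms:
            df[t] = df.get(t, 0) + 1
    return {t: math.log(n / c) for t, c in df.items()}

def tfidf(expected_counts, idf):
    """tf-idf vector; tf is the posterior-weighted expected count."""
    return {t: c * idf.get(t, 0.0) for t, c in expected_counts.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Posterior-weighted syllable-fragment counts for two toy spoken documents:
docs = {"doc1": {"ka": 0.9, "ta": 1.7, "ri": 0.4},
        "doc2": {"ka": 0.2, "mo": 1.1}}
idf = idf_weights(docs)
vecs = {d: tfidf(c, idf) for d, c in docs.items()}
query = tfidf({"ta": 1.0, "ri": 1.0}, idf)
best = max(vecs, key=lambda d: cosine(query, vecs[d]))
print(best)  # doc1 ranks first: it contains the query fragments
```

Fragments kept in the confusion network but absent from the one-best path still contribute fractional mass to the vectors, which is the mechanism behind the reported MAP gains.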
Despite the current importance given to L2 vocabulary acquisition in the last two decades, considerable deficiencies are found in L2 students' vocabulary size. One of the aspects that may influence vocabulary learning is word frequency. However, scholars warn that frequency may lead to wrong conclusions if the way words are distributed is ignored.…
Macizo, Pedro; Van Petten, Cyma; O'Rourke, Polly L.
Many multisyllabic words contain shorter words that are not semantic units, like the CAP in HANDICAP and the DURA ("hard") in VERDURA ("vegetable"). The spaces between printed words identify word boundaries, but spurious identification of these embedded words is a potentially greater challenge for spoken language comprehension, a challenge that is…
Peters, E.; Hulstijn, J.H.; Sercu, L.; Lutjeharms, M.
This study investigated three techniques designed to increase the chances that second language (L2) readers look up and learn unfamiliar words during and after reading an L2 text. Participants in the study, 137 college students in Belgium (L1 = Dutch, L2 = German), were randomly assigned to one of
Acquisition of discourse markers in Italian L2. The paper presents the results of a study on the acquisition of discourse markers (DMs) in Italian L2, conducted on a corpus of spoken language produced by exchange university students at the University of Florence enrolled in Italian L2 courses at the University Language Centre. The main objective of the study is to investigate this specific feature of foreigners' Italian in order to identify possible acquisitional sequences, helping to trace the development of learners' socio-pragmatic competence across the proficiency levels proposed in the Common European Framework of Reference for Languages (CEFR). In particular, the use of DMs is analysed in the dialogic speech of informants at the basic, independent and proficient levels of the CEFR, with reference to the taxonomic model of DMs proposed by Bazzanella in the Grande grammatica italiana di consultazione (1995), supplemented with new functions for this specific context. The main macro-phenomena that emerged are highlighted, with the further aim of reflecting on how Italian L2 acquisition data can serve as a reference point for designing instruction consistent with the natural processes of competence development.
Lee, Sunjung; Pulido, Diana
This study investigated the impact of topic interest, alongside L2 proficiency and gender, on L2 vocabulary acquisition through reading. A repeated-measures design was used with 135 Korean EFL students. Control variables included topic familiarity, prior target-word knowledge, and target-word difficulty (word length, class, and concreteness).…
Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie
Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…
Ludington, Jason Darryl
Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…
Barnhart, Anthony S.; Goldinger, Stephen D.
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…
Vroomen, J.; de Gelder, B.
Norris, McQueen & Cutler present a detailed account of the decision stage of the phoneme monitoring task. However, we question whether this contributes to our understanding of the speech recognition process itself, and we fail to see why phonotactic knowledge plays a role in phoneme…
Ordelman, Roeland J.F.; de Jong, Franciska M.G.; Huijbregts, M.A.H.; van Leeuwen, David
Abstract—Whereas the growth of storage capacity is in accordance with widely acknowledged predictions, the possibilities to index and access the archives created are lagging behind. This is especially the case in the oral history domain, and much of the rich content in these collections runs the risk…
John G. Holden
Pronunciation time probability density and hazard functions from large speeded word-naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics, i.e. interaction dominant dynamics. Lognormal and inverse power-law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power-law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald, alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions.
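The model comparison described in this abstract rests on fitting candidate distributions to pronunciation-time data by maximum likelihood and comparing fit quality. A minimal sketch of that kind of comparison, on synthetic data and with scipy's `lognorm` and `exponnorm` (ex-Gaussian) families standing in for the paper's fuller candidate set (the parameters below are illustrative assumptions, not the study's values):

```python
# Illustrative sketch (synthetic data, not the study's analysis): compare how
# well a lognormal versus an ex-Gaussian distribution describes a set of
# simulated pronunciation times, using maximum likelihood and AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated pronunciation times in ms, generated from a lognormal.
rt = rng.lognormal(mean=6.0, sigma=0.5, size=2000)

def aic(loglik, k):
    """Akaike information criterion from a summed log-likelihood."""
    return 2 * k - 2 * loglik

# Lognormal fit with location fixed at zero (shape, scale: 2 free parameters).
ln_params = stats.lognorm.fit(rt, floc=0)
ln_aic = aic(np.sum(stats.lognorm.logpdf(rt, *ln_params)), k=2)

# Ex-Gaussian fit: scipy's exponnorm is a normal-exponential convolution
# (K, loc, scale: 3 free parameters), the classic additive-process model.
ex_params = stats.exponnorm.fit(rt)
ex_aic = aic(np.sum(stats.exponnorm.logpdf(rt, *ex_params)), k=3)

print(f"lognormal AIC:   {ln_aic:.1f}")
print(f"ex-Gaussian AIC: {ex_aic:.1f}")
```

With lognormally generated data, the lognormal family should yield the lower AIC; on real naming data, the paper's point is that lognormal/power-law mixtures outperform the additive ex-Gaussian and ex-Wald alternatives.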
Although humour conceals the cultural exclusion in the data set, the cultural codes in the visual material generalise the non-Western 'other' as either extremely religious or as fundamentally different. Key terms: hegemony, Flemish language textbook, Critical Discourse Analysis, focus group discussion, representational ...
Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...
Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.
Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…
Sulpizio, S.; McQueen, J.M.
Do listeners use lexical stress at an early stage in word learning? Artificial-lexicon studies have shown that listeners can learn new spoken words easily. These studies used non-words differing in consonants and/or vowels, but not differing only in stress. If listeners use stress information in
This study investigated the effect of task type on incidental L2 vocabulary learning. The different tasks investigated in this study differed in terms of repetition of encounters and task involvement load. In a within-subjects design, 72 Iranian learners of English practised 18 target words in three exercise conditions: three ...
This dissertation on adult second language (L2) learning investigates individual learners' experiences with listening in Danish as an L2 in everyday situations at work. More specifically, the study explores when international employees, who work at international companies in Denmark with English as a corporate language, listen in Danish at work, how they handle these situations, what problems they experience, and why some situations are more difficult to listen in than others. The study makes use of qualitative research methods and theoretical aspects from psycholinguistic approaches as well as socially…
Pu, He; Holcomb, Phillip J; Midgley, Katherine J
Research has shown neural changes following second language (L2) acquisition after weeks or months of instruction. But are such changes detectable even earlier than previously shown? The present study examines the electrophysiological changes underlying the earliest stages of second language vocabulary acquisition by recording event-related potentials (ERPs) within the first week of learning. Adult native English speakers with no previous Spanish experience completed less than four hours of Spanish vocabulary training, with pre- and post-training ERPs recorded to a backward translation task. Results indicate that beginning L2 learners show rapid neural changes following learning, manifested in changes to the N400, an ERP component sensitive to lexicosemantic processing and degree of L2 proficiency. Specifically, learners in early stages of L2 acquisition show growth in N400 amplitude to L2 words following learning as well as a backward translation N400 priming effect that was absent pre-training. These results were shown within days of minimal L2 training, suggesting that the neural changes captured during adult second language acquisition are more rapid than previously shown. Such findings are consistent with models of early stages of bilingualism in adult learners of L2 (e.g., Kroll and Stewart's RHM) and reinforce the use of ERP measures to assess L2 learning.
Skotara, Nils; Kügow, Monique; Salden, Uta; Hänel-Faulhaber, Barbara; Röder, Brigitte
The present study compared the neural correlates of an intramodally and a crossmodally acquired second language (L2). Deaf people who had learned their L1, German Sign Language (DGS), and their L2, German, through the visual modality were compared with hearing L2 learners of German and German native speakers. Correct and incorrect German sentences were presented word by word on a computer screen while the electroencephalogram was recorded. At the end of each sentence, the participants judged whether or not the sentence was correct. Two types of violations were realized: Either a semantically implausible noun or a violation of subject-verb number agreement was embedded at a sentence medial position. Semantic errors elicited an N400, followed by a late positivity in all groups. In native speakers of German, verb-agreement violations were followed by a left lateralized negativity, which has been associated with an automatic parsing process. We observed a syntax related negativity in both high performing hearing and deaf L2 learners as well. Finally, this negativity was followed by a posteriorly distributed positivity in all three groups. Although deaf learners have learned German as an L2 mainly via the visual modality they seem to engage comparable processing mechanisms as hearing L2 learners. Thus, the data underscore the modality transcendence of language.
We investigated the relations of L2 (i.e., English) oral reading fluency, silent reading fluency, word reading automaticity, oral language skills, and L1 literacy skills (i.e., Spanish) to L2 reading comprehension for Spanish-speaking English language learners in the first grade (N = 150). An analysis was conducted for the entire sample as well as…
Zarei, Abbas Ali; Baftani, Fahimeh Nasiri
To investigate the effects of different techniques of vocabulary portfolio including word map, word wizard, concept wheel, visual thesaurus, and word rose on L2 vocabulary comprehension and production, a sample of 75 female EFL learners of Kish Day Language Institute in Karaj, Iran were selected. They were in five groups and each group received…
Tonzar, Claudio; Lotto, Lorella; Job, Remo
In this study we investigated the effects of two learning methods (picture- or word-mediated learning) and of word status (cognates vs. noncognates) on the vocabulary acquisition of two foreign languages: English and German. We examined children from fourth and eighth grades in a school setting. After a learning phase during which L2 words were…
Campeanu, Sandra; Craik, Fergus I M; Alain, Claude
Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.
Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo
In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.
The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…
Janssen, Maarten; Visser, A.
In many disciplines, the notion of a word is of central importance. For instance, morphology studies "le mot comme tel, pris isolément" ("the word as such, taken in isolation"; Mel'čuk, 1993). In the philosophy of language the word was often considered to be the primary bearer of meaning. Lexicography has as its fundamental role
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Forteza Fernandez, Rafael Filiberto; Korneeva, Larisa I.
Based on Selinker's hypothesis of five psycholinguistic processes shaping interlanguage (1972), the paper focuses attention on the Russian L2-learners' overreliance on the L1 as the main factor hindering their development. The research problem is, therefore, the high incidence of L1 transfer in the spoken and written English language output of…
Audio description in a class of L2 Italian: a didactic experiment. Audio description is an inter-semiotic translation process, converting visuals into spoken language. This translation practice is meant for visually impaired individuals and aims to increase their social inclusion and the availability of suitable media products, such as audio-described movies, for this specific audience. In this contribution, however, we will not focus on the social function of this translation practice, but will explore its potential in the field of foreign language didactics. We will present the results of a didactic experiment, carried out in a class of L2 Italian at Ghent University, in which the students were asked to write an audio description script. The main goal of this exploratory study is to test the validity of audio description as a didactic tool in a class of Italian as a foreign language and to identify the linguistic challenges that emerge for the students during a task of this kind. The results indicate that audio description is certainly a valid didactic tool for an L2 learning environment, since it promotes metalinguistic reflection and consequent awareness of various aspects of the used language, such as morpho-syntactic features (pronouns, prepositions, verbs), lexical aspects (encouraging precision and variety) and the (inter)cultural dimension.
Zhou, Lulin; Duff, Fiona J.; Hulme, Charles
We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…
Bonin, Patrick; Laroche, Betty; Perret, Cyril
The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…
This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research. The data of the study are registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and on the internet. The data take the form of words, phrases and abbreviations. For data collection, the writer uses the library method as her instrument, relating it to the study of register in spoken and written communication. The data were analysed using a descriptive method. The registers are classified into formal and informal, and the meaning of each register is identified.
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals, Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significant higher scores for total language on the CELF-P and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of whether DDI can consistently…
Palladino, Paola; Cismondo, Dhebora; Ferrari, Marcella; Ballagamba, Isabella; Cornoldi, Cesare
The present study aimed to investigate L2 spelling skills in Italian children by administering an English word dictation task to 13 children with dyslexia (CD), 13 control children (comparable in age, gender, schooling and IQ) and a group of 10 children with an English learning difficulty but no L1 learning disorder. Patterns of difficulties were examined for accuracy and type of errors in spelling dictated short and long words (i.e. two- and three-syllable words). Notably, CD were poor in spelling English words. Furthermore, their errors were mainly related to the phonological representation of words, as they made more 'phonologically' implausible errors than controls. In addition, CD errors were more frequent for short than long words. Conversely, the three groups did not differ in the number of plausible ('non-phonological') errors, that is, words that were incorrectly written but whose reading could correspond to the dictated word via either Italian or English rules. Error analysis also showed syllable position differences in the spelling patterns of CD, children with an English learning difficulty and control children. Copyright © 2016 John Wiley & Sons, Ltd.
Campeanu, Sandra; Craik, Fergus I M; Backer, Kristina C; Alain, Claude
The present study was designed to examine listeners' ability to use voice information incidentally during spoken word recognition. We recorded event-related brain potentials (ERPs) during a continuous recognition paradigm in which participants indicated on each trial whether the spoken word was "new" or "old." Old items were presented at 2, 8 or 16 words following the first presentation. Context congruency was manipulated by having the same word repeated by either the same speaker or a different speaker. The different speaker could share the gender, accent or neither feature with the word presented the first time. Participants' accuracy was greatest when the old word was spoken by the same speaker than by a different speaker. In addition, accuracy decreased with increasing lag. The correct identification of old words was accompanied by an enhanced late positivity over parietal sites, with no difference found between voice congruency conditions. In contrast, an earlier voice reinstatement effect was observed over frontal sites, an index of priming that preceded recollection in this task. Our results provide further evidence that acoustic and semantic information are integrated into a unified trace and that acoustic information facilitates spoken word recollection. Copyright © 2014 Elsevier Ltd. All rights reserved.
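ERP effects like the late positivity over parietal sites reported here are typically quantified by averaging epochs per condition and comparing mean amplitude in a fixed time window. A minimal, self-contained sketch of that computation on synthetic single-channel data (the channel, window and effect sizes below are illustrative assumptions, not the authors' values):

```python
# Toy illustration with synthetic data (hypothetical channel and time-window
# choices): the "old/new effect" as a difference in mean amplitude between
# old and new words in a late 500-800 ms window.
import numpy as np

rng = np.random.default_rng(1)
sfreq = 250                                  # sampling rate in Hz
times = np.arange(-0.2, 1.0, 1 / sfreq)      # epoch: -200 ms to 1000 ms

def make_epochs(n_trials, late_boost):
    """Synthetic epochs: Gaussian noise plus a late positive bump (in microvolts)."""
    noise = rng.normal(0.0, 2.0, size=(n_trials, times.size))
    bump = late_boost * np.exp(-((times - 0.65) ** 2) / (2 * 0.08 ** 2))
    return noise + bump

old_epochs = make_epochs(100, late_boost=4.0)  # old words: larger positivity
new_epochs = make_epochs(100, late_boost=1.0)

# Average over trials per condition, then average within the late window.
window = (times >= 0.5) & (times <= 0.8)
old_mean = old_epochs.mean(axis=0)[window].mean()
new_mean = new_epochs.mean(axis=0)[window].mean()
old_new_effect = old_mean - new_mean
print(f"old/new effect, 500-800 ms: {old_new_effect:.2f} microvolts")
```

Averaging over trials suppresses the noise while the condition-locked positivity survives, which is why the windowed difference isolates the effect.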
This study explores whether emphasizing the phonetic components of "kanji," Chinese characters used in Japanese, facilitates second language (L2) learners' novel character learning. Previous L2 studies on Chinese characters indicate that phonology plays a major part in word identification. However, this view remains controversial,…
Huettig, Falk; Brouwer, Susanne
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
Prebianca, Gicele V. V.
This study explores the relationship between lexical access and proficiency level in L2 speech production. Forty-one participants (intermediate and advanced learners of English as a foreign language) performed a lexical access task in L2 which yielded two measures: reaction time (RT) and naming accuracy (NA). The statistical analyses point to a facilitatory effect of semantically related word distractors on L2 picture-naming for the experimental and control conditions in both proficiency groups. ...
Escudero, P.; Hayes-Harb, R.; Mitterer, H.
The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English
Crowe, Kathryn; McLeod, Sharynne
The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…
This study has aimed to investigate language learners' emotional experiences through the lens of L2 future self-guides. To that end, the L2 motivational self system was chosen as the theoretical framework to relate learners' emotions to their L2 selves. However, due to inconsistent results of past research concerning the motivational role of the…
Ma, Tengfei; Chen, Baoguo; Lu, Chunming; Dunlap, Susan
This paper presents an experiment that investigated the effects of L2 proficiency and sentence constraint on semantic processing of unknown L2 words (pseudowords). All participants were Chinese native speakers who learned English as a second language. In the experiment, we used a whole sentence presentation paradigm with a delayed semantic relatedness judgment task. Both higher and lower-proficiency L2 learners could make use of the high-constraint sentence context to judge the meaning of novel pseudowords, and higher-proficiency L2 learners outperformed lower-proficiency L2 learners in all conditions. These results demonstrate that both L2 proficiency and sentence constraint affect subsequent word learning among second language learners. We extended L2 word learning into a sentence context, replicated the sentence constraint effects previously found among native speakers, and found proficiency effects in L2 word learning. Copyright © 2015 Elsevier B.V. All rights reserved.
Sheikh, Naveed A; Titone, Debra
The hypothesis that word representations are emotionally impoverished in a second language (L2) has variable support. However, this hypothesis has only been tested using tasks that present words in isolation or that require laboratory-specific decisions. Here, we recorded eye movements for 34 bilinguals who read sentences in their L2 with no goal other than comprehension, and compared them to 43 first language readers taken from our prior study. Positive words were read more quickly than neutral words in the L2 across first-pass reading time measures. However, this emotional advantage was absent for negative words for the earliest measures. Moreover, negative words but not positive words were influenced by concreteness, frequency and L2 proficiency in a manner similar to neutral words. Taken together, the findings suggest that only negative words are at risk of emotional disembodiment during L2 reading, perhaps because a positivity bias in L2 experiences ensures that positive words are emotionally grounded.
In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech by the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that the software recorded an accuracy of around 70% in the law and order domain. For future work, we plan to develop similar systems for multiple languages.
The article presents the theological message of the last words that Jesus spoke from the height of the cross. The content is organized around three kinds of Christ's relations: the words addressed to God the Father; the words addressed to the good people standing by the cross; and the so-called declarations, which the Master addressed to no one in particular but uttered in general. All these words speak of the Master's love. They express His full awareness of what is being done and of the decision He voluntarily took. Above all, the Lord's statements reveal His obedience to the will of God expressed in the inspired words of the Holy Scriptures. Jesus fulfills all the prophecies of the Old Testament through the words He pronounced and the works He accomplished, which will become the content of the New Testament.
The Freiburg monosyllabic test has a word inventory based on word frequency in written sources from the 19th century, the distribution of which is uneven across the test lists. The median distributions of word frequency ranking in contemporary language of nine test lists deviate significantly from the overall median of all 400 monosyllables. Lists 1, 6, 9, 10, and 17 include significantly more very rarely used words; lists 2, 3, 5, and 15 include significantly more very frequently used words. Compared with word frequency in contemporary spoken German, about 45% of the test words are practically no longer used. Due to this high proportion of extremely rarely used or obsolete words, the word inventory is no longer representative of contemporary German, neither for the written nor for the spoken language. Highly educated persons with a large vocabulary are thereby favored. The reference values for normal-hearing persons should therefore be reevaluated.
This study investigates measures for second language (L2) writing development. A T-unit, which has been found to be the most satisfactory unit of analysis for measuring L2 development in English, is extended to measure L2 Chinese writing development through a cross-sectional design in this study. Data were collected from three L2 Chinese learner groups…
Dufour, Sophie; Brunellière, Angèle; Frauenfelder, Ulrich H
Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word frequency effect that is limited to a purely decisional locus after word identification has been completed.
Yunisrina Qismullah Yusuf
This study focuses on the semantic prosodies of the word "robot" based on the words that collocate with it in spoken data. The data were collected from a lecturer's talk discussing two topics: man and machines in perfect harmony, and the effective temperature of workplaces. For annotation, the UCREL CLAWS5 tagset was used, with Tagset C5 set to horizontal output style. The corpus design follows ICE. The analysis reveals more positive than negative semantic prosodies of the word "robot" in the data, with 52 occurrences for positive (94.5%) and 3 occurrences for negative (5.5%). The words most often collocated with "robot" in the data are service with 8 collocations, machines with 20, surgical system with 15, and intelligence with 13.
Šimáčková, Š.; Podlipský, V.J.; Chládková, K.
As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,
Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.
Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…
Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…
Tomohito, HIROMORI; the Japan Society for the Promotion of Science Graduate School Hokkaido University
Recent research in L2 language education has begun to recognize that metacognition plays a significant role in L2 learning processes. These studies have investigated metacognitive awareness of learning strategies and the relationships among perceived strategy use, actual strategy use, and L2 performance. This paper reports a classroom-based, longitudinal study of the effect of metacognitive strategy instruction on reading comprehension. To achieve the purpose of the study, two groups of EFL ...
This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…
Konopka, Agnieszka E; Meyer, Antje; Forest, Tess A
The leading theories of sentence planning - Hierarchical Incrementality and Linear Incrementality - differ in their assumptions about the coordination of processes that map preverbal information onto language. Previous studies showed that, in native (L1) speakers, this coordination can vary with the ease of executing the message-level and sentence-level processes necessary to plan and produce an utterance. We report the first series of experiments to systematically examine how linguistic experience influences sentence planning in native (L1) speakers (i.e., speakers with life-long experience using the target language) and non-native (L2) speakers (i.e., speakers with less experience using the target language). In all experiments, speakers spontaneously generated one-sentence descriptions of simple events in Dutch (L1) and English (L2). Analyses of eye-movements across early and late time windows (pre- and post-400 ms) compared the extent of early message-level encoding and the onset of linguistic encoding. In Experiment 1, speakers were more likely to engage in extensive message-level encoding and to delay sentence-level encoding when using their L2. Experiments 2-4 selectively facilitated encoding of the preverbal message, encoding of the agent character (i.e., the first content word in active sentences), and encoding of the sentence verb (i.e., the second content word in active sentences) respectively. Experiment 2 showed that there is no delay in the onset of L2 linguistic encoding when speakers are familiar with the events. Experiments 3 and 4 showed that the delay in the onset of L2 linguistic encoding is not due to speakers delaying encoding of the agent, but due to a preference to encode information needed to select a suitable verb early in the formulation process. Overall, speakers prefer to temporally separate message-level from sentence-level encoding and to prioritize encoding of relational information when planning L2 sentences, consistent with
Claudia K. Friedrich
Multiple lexical representations overlapping with the input (cohort neighbors) are temporarily activated in the listener's mental lexicon when speech unfolds in time. Activation of cohort neighbors appears to decline rapidly as soon as there is mismatch with the input. However, it is a matter of debate whether or not they are completely excluded from further processing. We recorded behavioral data and event-related brain potentials (ERPs) in auditory-visual word onset priming during a lexical decision task. As primes we used the first two syllables of spoken German words. In a carrier word condition, the primes were extracted from spoken versions of the target words (ano- taken from ANORAK 'anorak'). In a cohort neighbor condition, the primes were taken from words that overlap with the target word up to the second nucleus (ana- taken from ANANAS 'pineapple'). Relative to a control condition, where primes and targets were unrelated, lexical decision responses for cohort neighbors were delayed. This reveals that cohort neighbors are disfavored by the decision processes at the behavioral front end. In contrast, left-anterior ERPs reflected long-lasting facilitated processing of cohort neighbors. We interpret these results as evidence for extended parallel processing of cohort neighbors: in parallel with the preparation and elicitation of delayed lexical decision responses to cohort neighbors, aspects of the processing system appear to keep track of those less efficient candidates.
Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.
Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna
Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.
Jessica Ann Obermeyer
Methods: Ten native English-speaking healthy elderly participants between the ages of 50 and 80 were recruited. Exclusionary criteria included neurological disease/injury, history of learning disability, uncorrected hearing or vision impairment, history of drug/alcohol abuse, and presence of cognitive decline (based on the Cognitive Linguistic Quick Test). Spoken and written discourse was analyzed for microlinguistic measures including total words, percent correct information units (CIUs; Nicholas & Brookshire, 1993), and percent complete utterances (CUs; Edmonds et al., 2009). CIUs measure relevant and informative words, while CUs focus at the sentence level and measure whether a relevant subject, verb, and object (if appropriate) are present. Results: Analysis was completed using the Wilcoxon Rank Sum Test due to the small sample size. Preliminary results revealed that healthy elderly people produced significantly more words in spoken retellings than in written retellings (p = .000); however, this measure contrasted with %CIUs and %CUs, with participants producing significantly higher %CIUs (p = .000) and %CUs (p = .000) in written story retellings than in spoken story retellings. Conclusion: These findings indicate that written retellings, while shorter, showed higher accuracy at both the word (CIU) and sentence (CU) level. This observation could be related to the ability to revise written text and therefore make it more concise, whereas the nature of speech results in more embellishment and "thinking out loud," such as comments about the task, associated observations about the story, etc. We plan to run more participants and conduct a main concepts analysis (before conference time) to gain more insight into modality differences and implications.
The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who were taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though it does not cause misunderstanding at the moment, this may become problematic if they have to communicate in the real world.
Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones : Putting Spoken Dialog Systems into Practice
These proceedings presents the state-of-the-art in spoken dialog systems with applications in robotics, knowledge access and communication. It addresses specifically: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and, 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.
Jacques Melitz; Farid Toubal
We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or from ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...
Schillingmann, Lars; Ernst, Jessica; Keite, Verena; Wrede, Britta; Meyer, Antje S; Belke, Eva
In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool's performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
This paper presents a quantitative analysis of the variable use of the subjunctive, which constitutes a notable "fragile zone" in the spoken French of advanced L2 learners. A comparative approach is adopted to consider the relative impact of naturalistic and instructed L2 exposure in the case of our learner-participants, who were Irish university learners in both a classroom and a study-abroad context. The findings attempt to illuminate the difficulty that use of the subjunctive poses to the learners: their minimal use of this form, irrespective of their context of acquisition, is lexically restricted to the occurrence of falloir in the matrix clause, although the learners do produce other subjunctive-conditioning verbs and conjunctions expressing subordination. The findings are discussed in terms of their pedagogical and acquisitional implications.
Shitova, Natalia; Roelofs, Ardi; Schriefers, Herbert; Bastiaansen, M.C.M.; Schoffelen, Jan-Mathijs
The colour-word Stroop task and the picture-word interference task (PWI) have been used extensively to study the functional processes underlying spoken word production. One of the consistent behavioural effects in both tasks is the Stroop-like effect: The reaction time (RT) is longer on incongruent
Moskovsky, Christo; Assulaimani, Turki; Racheva, Silvia; Harkins, Jean
The research reported in this article explores the relationship between Dörnyei's (2005, 2009) Second Language Motivational Self System (L2MSS) and the L2 proficiency level of Saudi learners of English as a foreign language (EFL). Male and female participants (N = 360) responded to a questionnaire relating to the main components of L2MSS, the…
Roberts, Leah; Siyanova-Chanturia, Anna
Second language (L2) researchers are becoming more interested in both L2 learners' knowledge of the target language and how that knowledge is put to use during real-time language processing. Researchers are therefore beginning to see the importance of combining traditional L2 research methods with those that capture the moment-by-moment…
…in dyslexia provide support for a direct route from visual word forms to semantic and articulatory codes. There also seems to be independence in the… experiment (LaBerge & Samuels, 1974; Rumelhart & McClelland, 1982, 1986). Examples of some of these separate codes include a visual image of the… form of a spoken word (visual code), pronunciation of the word (phonological code), or the association of related words (semantic codes). Studies of the…
National Aeronautics and Space Administration — The ASTER L2 Surface Emissivity is an on-demand product generated using the five thermal infrared (TIR) bands (acquired either during the day or night time) between...
National Aeronautics and Space Administration — The ASTER L2 Surface Kinetic Temperature is an on-demand product generated using the five thermal infrared (TIR) bands (acquired either during the day or night time)...
Shum, Kathy Kar-man; Ho, Connie Suk-Han; Siegel, Linda S.; Au, Terry Kit-fong
Can young students' early reading abilities in their first language (L1) predict later literacy development in a second language (L2)? The cross-language relationships between Chinese (L1) and English (L2) among 87 Hong Kong students were explored in a longitudinal study. Chinese word-reading fluency, Chinese rapid digit naming, and Chinese rhyme…
The analysis of circumstance adverbials in this paper was based on L1 and L2 corpora of student presentations, each of which consisted of approximately 30,000 words. The overall goal of the investigation was to identify the specific functions L1 and L2 college students attributed to circumstance adverbials (the most frequently used adverbial class in…
23 Jan 2012 ... The courtier is elevated to a higher rank and rewarded, while his enemies are punished (Humphreys 1973:217). Humphreys (1973:217) ..... Baldwin, J.G., 1978, Daniel: Tyndale Old Testament Commentaries, Inter-Varsity, Leicester. Bentzen, A., 1952, Daniel: Handbuch zum Alten Testament, 2. Auflage, Mohr ...
Whitford, Veronica; Titone, Debra
We used eye movement measures of paragraph reading to examine how word frequency and word predictability impact first-language (L1) and second-language (L2) word processing in matched bilingual older and younger adults varying in amount of current L2 experience. Our key findings were threefold. First, across both early- and late-stage reading, word frequency effects were generally larger in older than in younger adults, whereas word predictability effects were generally age-invariant. Second, across both age groups and both reading stages, word frequency effects were larger in the L2 than in the L1, whereas word predictability effects were language-invariant. Third, graded differences in current L2 experience modulated L1 and L2 word processing in younger adults, but had no impact in older adults. Specifically, greater current L2 experience facilitated L2 word processing, but impeded L1 word processing, among younger adults only. Taken together, we draw two main conclusions. First, bilingual older adults experience changes in word-level processing that are language-non-specific, potentially because lexical accessibility decreases with age. Second, bilingual older adults experience changes in word-level processing that are insensitive to graded differences in current L2 experience, potentially because lexical representations reach a functional ceiling over time.
This paper addresses the process of transcribing and annotating spontaneous non-native speech with the aim of compiling a training corpus for the development of Computer Assisted Pronunciation Training (CAPT) applications, enhanced with Automatic Speech Recognition (ASR) technology. To better adapt ASR technology to CAPT tools, the recognition…
This article provides an overview of recent literature and research on word classes, focusing in particular on typological approaches to word classification. The cross-linguistic classification of word class systems (or parts-of-speech systems) presented in this article is based on statements found in grammatical descriptions of some 50 languages, which together constitute a representative sample of the world's languages (Hengeveld et al. 2004: 529). It appears that there are both quantitative and qualitative differences between the word class systems of individual languages. Whereas some languages employ a parts-of-speech system that includes the categories Verb, Noun, Adjective and Adverb, other languages may use only a subset of these four lexical categories. Furthermore, quite a few languages have a major word class whose members cannot be classified in terms of the categories Verb – Noun – Adjective...
It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
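The word and phone error rates reported in this abstract are standard ASR evaluation metrics. As a minimal illustrative sketch (not the paper's actual implementation), word error rate can be computed as the word-level Levenshtein distance between a reference transcript and a hypothesis transcript, normalized by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: minimum number of word substitutions,
    insertions, and deletions needed to turn the hypothesis into
    the reference, divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edits to align ref[:i] with hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

# One word missing from a six-word reference -> WER of 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Phone error rate follows the same formula applied to phone sequences rather than word sequences.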
Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R
Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.
Pharao Hansen, Magnus
This article presents data showing that the syntax of the Nahuatl dialect spoken in Hueyapan, Morelos, Mexico has traits of nonconfigurationality: free word order and free pro-drop, with predicate-initial word order being pragmatically neutral. It permits discontinuous noun phrases and has no nat...
Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect.
Patton-Terry, Nicole; Connor, Carol
This study explored the spelling skills of African American second graders who produced African American English (AAE) features in speech. The children (N = 92), who varied in spoken AAE use and word reading skills, were asked to spell words that contained phonological and morphological dialect-sensitive (DS) features that can vary between AAE and…
El Euch, Sonia
Several researchers have suggested that definitional skill explains academic success/failure (Gagne, 2004; Snow, 1987). The words used to investigate definitional skill have all been concrete words given in the first language (L1) and/or the second language (L2) of the participants. This paper reports a study investigating the quality of the…
VanPatten, Bill; Smith, Megan
In this article, we challenge the notion that aptitude--operationalized as grammatical sensitivity as measured by the Words in Sentences section of the Modern Language Aptitude Test--is central to adult second language (L2) acquisition. We present the findings of a study on the acquisition of two properties of Japanese, head-final word order and…
Remington, Robert J.
Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how a SL interface will simplify the operator's task and enhance overall system usability.
We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency, and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e., difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.
Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo
This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text...
Jones, -A C; Toscano, E; Botting, N; Marshall, C-R; Atkinson, J R; Denmark, T; Herman, -R; Morgan, G
Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Previous studies have shown that reading is an important source of incidental second language (L2) vocabulary acquisition. However, we still do not have a clear picture of what happens when readers encounter unknown words. Combining offline (vocabulary tests) and online (eye-tracking) measures, the incidental acquisition of vocabulary knowledge…
Pladevall Ballester, Elisabet
The apparent optionality in the use of null and overt pronominal subjects and the apparently free word order or distribution of preverbal and postverbal subjects in Spanish obey a number of discourse-pragmatic constraints which play an important role in Spanish L2 subject development. Although research on subject properties at the syntax-discourse…
The current study addresses an aspect of second language (L2) phonological acquisition that has received little attention to date--namely, the acquisition of allophonic variation as a word boundary cue. The role of subphonemic variation in the segmentation of speech by native speakers has been indisputably demonstrated; however, the acquisition of…
Gluhareva, Daria; Prieto, Pilar
Recent research has shown that beat gestures (hand gestures that co-occur with speech in spontaneous discourse) are temporally integrated with prosodic prominence and that they help word memorization and discourse comprehension. However, little is known about the potential beneficial effects of beat gestures in second language (L2) pronunciation…
This study investigated the role of morphological and contextual information in inferring the meaning of unknown L2 words during reading. Four groups of college-level ESL students, beginning (n = 34), intermediate (n = 27), high-intermediate (n = 21), and advanced (n = 25), chose the inferred meanings of 20 pseudo-compounds (e.g.,…
Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.
The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…
Over the last few decades, task-based language teaching has inspired and propelled much research into how task complexity affects second language (L2) learners’ performance and development. To date, however, the task-based approach has mainly been researched in connection with learners’ oral and written production, while its applicability to L2 reading has largely been unattended to. In addition, only a few studies exist that have examined the effects of glossing on L2 grammatical constructio...
Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda
Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.
psychopathology which may complicate postoperative adjustment, notably dysmorphophobia. It has been consistently noted that many of these patients have a pre-operative psychiatric history (especially of depression), poor self-confidence and self-esteem, and a negative body image. During assessment of ...
This paper synthesizes cross-sectional studies of the effect of proficiency on second language (L2) pragmatics to answer the synthesis question: Does proficiency affect adult learners' pragmatic competence? Findings have revealed an overall positive proficiency effect on pragmatic competence, and in most cases higher proficiency learners have…
Mirarchi, Daniele; Rossi, Roberto; CERN. Geneva. ATS Department
Dumps induced by sudden increases of losses in the half-cell 16L2 have been a serious machine limitation during the 2017 run. The aim of this MD was to perform local aperture measurements in order to assess differences after the beam screen regeneration, compared to first measurements in 2017.
Sauro, Shannon; Smith, Bryan
This study examines the linguistic complexity and lexical diversity of both overt and covert L2 output produced during synchronous written computer-mediated communication, also referred to as chat. Video enhanced chatscripts produced by university learners of German (N = 23) engaged in dyadic task-based chat interaction were coded and analyzed for…
Husby, Olaf; Koreman, Jacques; Martínez-Paricio, Violeta; Abrahamsen, Jardar E.; Albertsen, Egil; Hedayatfar, Keivan; Bech, Øyvind
The pronunciation of a second or foreign language is often very challenging for L2 learners. It is difficult to address this topic in the classroom, because learners with different native languages (L1s) can have very different challenges. We have therefore developed a Computer-Assisted Listening and Speaking Tutor (CALST), which selectively…
Two experiments examined the hypothesis that L1 phonological awareness plays a role in children's ability to extract morphological patterns of English as L2 from the auditory input. In Experiment 1, 84 Chinese-speaking third graders were tested on whether they extracted the alternation pattern between the base and the derived form (e.g., inflate - inflation) from multiple exposures. Experiment 2 further assessed children's ability to use morphological cues for syntactic categorization through exposures to novel morphologically varying forms (e.g., lutate vs. lutant) presented in the corresponding sentential positions (noun vs. verb). The third-grade EFL learners revealed emergent sensitivity to the morphological cues in the input but failed in fully processing intraword variations. The learners with poorer L1 PA were likely to encounter difficulties in identifying morphological alternation rules and in discovering the syntactic properties of L2 morphology. In addition to L1 PA, L2 vocabulary knowledge also contributed significantly to L2 morphological learning.
One of the key issues in bilingual lexical representation is whether L1 processing is facilitated by L2 words. In this study, we conducted two experiments using the masked priming paradigm to examine how L2-L1 translation priming effects emerge when unbalanced, low-proficiency, Korean-English bilinguals performed a lexical decision task. In Experiment 1, we used a 150 ms SOA (50 ms prime duration followed by a blank interval of 100 ms) and found a significant L2-L1 translation priming effect. In contrast, in Experiment 2, we used a 60 ms SOA (50 ms prime duration followed by a blank interval of 10 ms) and found a null effect of L2-L1 translation priming. This finding is the first demonstration of a significant L2-L1 translation priming effect with unbalanced Korean-English bilinguals. Implications of this work are discussed with regard to bilingual word recognition models.
Rogers, Chad S; Wingfield, Arthur
Older adults' normally adaptive use of semantic context to aid in word recognition can have a negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word-pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed older adults' greater tendency to misidentify words based on their semantic context compared to the young adults, and to do so with a higher level of confidence. This age difference was unaffected by differences in the relative level of acoustic masking.
Alt, Mary; Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson
The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets of games that manipulated phonological or visuospatial demands. All children were monolingual English-speaking 2nd graders without oral language impairment. The word learning games measured children's ability to link novel names with novel objects, to make decisions about the accuracy of those names and objects, to recognize the semantic features of the objects, and to produce the names of the novel words. Accuracy data were analyzed using analyses of covariance with nonverbal intelligence scores as a covariate. Word learning deficits were evident for children with dyslexia across every type of manipulation and on 3 of 5 tasks, but not for every combination of task/manipulation. Deficits were more common when task demands taxed phonology. Visuospatial manipulations led to both disadvantages and advantages for children with dyslexia. Children with dyslexia evidence spoken word learning deficits, but their performance is highly dependent on manipulations and task demand, suggesting a processing trade-off between visuospatial and phonological demands.
The value of waveform displays as visual feedback was explored in a training study involving perception and production of L2 Japanese by beginning-level L1 English learners. A pretest-posttest design compared auditory-visual (AV) and auditory-only (A-only) Web-based training. Stimuli were singleton and geminate /t,k,s/ followed by /a,u/ in two conditions (isolated words, carrier sentences). Fillers with long vowels were included. Participants completed a forced-choice identification task involving minimal triplets: singletons, geminates, long vowels (e.g., sasu, sassu, saasu). Results revealed (a) a significant improvement in geminate identification following training, especially for AV; (b) a significant effect of geminate (lowest scores for /s/); (c) no significant effect of condition; and (d) no significant improvement for the control group. Most errors were misperceptions of geminates as long vowels. A test of generalization revealed a 5% decline in accuracy for AV and 14% for A-only. Geminate production improved significantly (especially for AV) based on rater judgments; improvement was greatest for /k/ and smallest for /s/. Most production errors involved substitution of a singleton for a geminate. Post-study interviews produced positive comments on Web-based training. Waveforms increased awareness of durational differences. Results support the effectiveness of auditory-visual input in L2 perception training, with transfer to novel stimuli and improved production.
The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat," "hut," "heart" and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…
Carter, Ronald; McCarthy, Michael
This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…
Inegbeboh, Bridget O.
Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…
Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article, primarily, presents a critical evaluation of the development and refinement of the assessment instrument used to assess formally the spoken-language educational interpreters at ...
Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...
This article presents a feature analysis of four expository essays (Texts A, B, C and D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written than the other two (Texts A and B), which are considered more spoken in their language use. The language features are…
G. M. Barabash
In this paper we introduce two families of periodic words (FLP-words of type 1 and FLP-words of type 2) that are connected with the Fibonacci words, and we investigate their properties.
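The Fibonacci words mentioned above follow a standard concatenation recurrence; a minimal sketch (the FLP-word constructions themselves are not reproduced here, and the choice of initial symbols is one common convention among several):

```python
def fibonacci_word(n: int) -> str:
    """Return the n-th Fibonacci word under the convention
    S1 = "b", S2 = "a", S(n) = S(n-1) + S(n-2)."""
    if n == 1:
        return "b"
    if n == 2:
        return "a"
    prev2, prev1 = "b", "a"
    for _ in range(n - 2):
        # Concatenate the two most recent words to get the next one.
        prev2, prev1 = prev1, prev1 + prev2
    return prev1

# Word lengths follow the Fibonacci numbers:
print([len(fibonacci_word(n)) for n in range(1, 8)])  # [1, 1, 2, 3, 5, 8, 13]
```

Under this convention the fifth Fibonacci word is "abaab", and each word is a prefix of the next-but-one, which is the structure periodic FLP-type constructions typically build on.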
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
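The retrieval comparison described above can be illustrated with a toy bag-of-words ranker; the lecture snippets, query, and relevance judgment below are invented for illustration, not taken from the study's data:

```python
from collections import Counter

def rank(query, docs):
    """Rank documents by term overlap with the query (toy scorer)."""
    q = Counter(query.lower().split())
    def score(text):
        return sum(min(q[t], c) for t, c in Counter(text.lower().split()).items() if t in q)
    return sorted(docs, key=lambda d: score(docs[d]), reverse=True)

def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k ranked documents that are relevant."""
    return sum(1 for d in ranking[:k] if d in relevant) / k

# Hypothetical index: OCR'd slide text vs. ASR transcripts for the same lectures.
slide_text = {"lec1": "gradient descent optimization", "lec2": "fourier transform of signals"}
spoken_text = {"lec1": "so um we minimize the loss function", "lec2": "the gradient of the transform"}
relevant = {"lec1"}  # assumed ground truth for the query below

query = "gradient descent"
print(precision_at_k(rank(query, slide_text), relevant, 1))   # 1.0: slide text names the topic
print(precision_at_k(rank(query, spoken_text), relevant, 1))  # 0.0: transcript wording misleads
```

The sketch mirrors the paper's observation only in spirit: slide text tends to contain the topical keywords verbatim, while spoken text paraphrases them, so the same query can rank the two sources differently.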
Cowles, H Wind; Ferreira, Victor S
Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production.
Jaswal, Vikram K.; Hansen, Mikkel
Children tend to infer that when a speaker uses a new label, the label refers to an unlabeled object rather than one they already know the label for. Does this inference reflect a default assumption that words are mutually exclusive? Or does it instead reflect the result of a pragmatic reasoning process about what the speaker intended? In two studies, we distinguish between these possibilities. Preschoolers watched as a speaker pointed toward (Study 1) or looked at (Study 2) a familiar object while requesting the referent for a new word (e.g. 'Can you give me the blicket?'). In both studies, despite the speaker's unambiguous behavioral cue indicating an intent to refer to a familiar object, children inferred that the novel label referred to an unfamiliar object. These results suggest that children expect words to be mutually exclusive even when a speaker provides some kinds of pragmatic...
Rumelhart, D.E.; Skokowski, P.G.; Martin, B.O.
In this project we have developed a language model based on Artificial Neural Networks (ANNs) for use in conjunction with automatic textual search or speech recognition systems. The model can be trained on large corpora of text to produce probability estimates that would improve the ability of systems to identify words in a sentence given partial contextual information. The model uses a gradient-descent learning procedure to develop a metric of similarity among terms in a corpus, based on context. Using lexical categories based on this metric, a network can then be trained to do serial word probability estimation. Such a metric can also be used to improve the performance of topic-based search by allowing retrieval of information that is related to desired topics even if no obvious set of key words unites all the retrieved items.
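The context-based similarity metric described above can be illustrated with a simple count-based sketch; the ANN and its gradient-descent training are not reproduced, and the toy corpus and all names below are invented:

```python
from collections import Counter
import math

def context_vectors(corpus, window=1):
    """Count each word's neighbouring words: words that occur in similar
    contexts end up with similar count vectors (distributional similarity)."""
    vecs = {}
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = ["the cat sat down", "the dog sat down", "a cat ran away"]
v = context_vectors(corpus)
# "cat" and "dog" share contexts ("the ... sat"), so they come out more
# similar to each other than "cat" is to "down":
print(cosine(v["cat"], v["dog"]) > cosine(v["cat"], v["down"]))  # True
```

Grouping words by such a metric yields the lexical categories the abstract mentions, over which serial word probabilities can then be estimated.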
Larsen, Lars Bo
This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home...... model roughly explains 50% of the observed variance in user satisfaction based on measures of task success and speech recognition accuracy, a result similar to those obtained at AT&T. The applied methods are discussed and evaluated critically....
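A variance-explained figure of the kind reported (satisfaction predicted from task success and recognition accuracy, as in the PARADISE-style evaluations done at AT&T) can be sketched with a single-predictor analogue; all the per-dialogue data below are invented:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-dialogue measurements: ASR word accuracy and
# user satisfaction on a 1-5 scale.
asr_accuracy = [0.95, 0.80, 0.60, 0.90, 0.55, 0.70]
satisfaction = [4.5, 4.0, 2.0, 4.2, 1.8, 3.5]

r = pearson(asr_accuracy, satisfaction)
# r squared is the share of satisfaction variance "explained" by accuracy:
print(round(r * r, 2))
```

A full PARADISE-style model is a multiple regression over several such predictors at once; the single-predictor r-squared here just illustrates what "explains 50% of the observed variance" means.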
A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
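The EER (equal error rate) quoted above is a standard detection metric: the operating point where the false-accept and false-reject rates coincide. A minimal sketch of estimating it from hypothetical target and non-target scores (a simple threshold sweep, not the interpolation used in formal evaluations):

```python
def equal_error_rate(target_scores, nontarget_scores):
    """Sweep thresholds over all observed scores and return the point
    where false-accept and false-reject rates are closest."""
    best_gap, eer = 1.0, 1.0
    for thr in sorted(target_scores + nontarget_scores):
        # False accepts: non-targets scoring at or above the threshold.
        far = sum(s >= thr for s in nontarget_scores) / len(nontarget_scores)
        # False rejects: targets scoring below the threshold.
        frr = sum(s < thr for s in target_scores) / len(target_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Invented detection scores with perfect separation: EER of 0.
targets = [0.90, 0.85, 0.80, 0.70]
nontargets = [0.40, 0.30, 0.20, 0.10]
print(equal_error_rate(targets, nontargets))  # 0.0
```

Lower EER means better discrimination; the 1.08% vs. 7.01% contrast in the abstract reflects how much harder 3 s utterances are than 30 s ones.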
Christensen, Tanya Karoli; Jensen, Torben Juel
Through mixed-model analyses of complement clauses in a corpus of spoken Danish, we examine the role of sentence adverbials in relation to a word order distinction in Scandinavian signalled by the relative position of sentence adverbials and the finite verb (V>Adv vs. Adv>V). The type of sentence adverbial was the third-most important factor in explaining the word order variation: sentence adverbials categorized as ‘dialogic’ are significantly associated with V>Adv word order. We argue that the results are readily interpretable in the light of the semantico-pragmatic hypothesis that V>Adv signals...
Scott, C M; Windsor, J
Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.
Moers, Cornelia; Meyer, Antje; Janse, Esther
High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups-younger children (8-12 years), adolescents (12-18 years) and older (62-95 years) Dutch speakers-show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
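Transitional probabilities of the kind used in this study can be estimated directly from bigram counts; a minimal sketch of the forward direction over an invented toy corpus (the study's Dutch materials are not reproduced):

```python
from collections import Counter

def forward_tps(corpus):
    """Forward transitional probability:
    P(w2 | w1) = count(w1 w2) / count(w1 as a left neighbour)."""
    bigrams, left = Counter(), Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for w1, w2 in zip(words, words[1:]):
            bigrams[(w1, w2)] += 1
            left[w1] += 1
    return {pair: n / left[pair[0]] for pair, n in bigrams.items()}

corpus = ["the cat sat", "the cat ran", "the dog sat"]
tp = forward_tps(corpus)
# "cat" follows "the" in 2 of the 3 occurrences of "the":
print(round(tp[("the", "cat")], 3))  # 0.667
```

The backward TP, P(w1 | w2), is computed symmetrically by conditioning on the right neighbour instead; a word with high TP in its context is exactly the kind predicted here to be read with a shorter spoken duration.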
Cantu, Virginia, Comp.; And Others
Prepared by bilingual teacher aide students, this glossary provides the Spanish translation of about 1,300 English words used in the bilingual classroom. Intended to serve as a handy reference for teachers, teacher aides, and students, the glossary can also be used in teacher training programs as a vocabulary builder for future bilingual teachers…
Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.
Errors in Italian L1 and L2: interference and learning. Can errors in Italian be approached today in a way that benefits both L1 and L2 Italian teachers? We believe so: glottodidactic research seems by now to have prepared a common terrain for these two learning situations, clearing the field of old prejudices and obsolete distinctions. Through the juxtaposition of concepts like “spoken language/written language”, “language errors/speech errors”, “spontaneous learning/guided learning”, “L1 Italian/L2 Italian” and “learning errors/interference errors”, different criteria are singled out for interpreting errors and evaluating them in relation to their causes, to communicative situations, to contexts and to the developmental stage of language learning.
In this study, we consider how native status and signal degradation influence French listeners’ segmentation of an incoming speech stream containing 'liaison', a phonological process that misaligns word and syllable boundaries. In particular, we investigate how both first language (L1) and second language (L2) French listeners compensate for the syllable-word misalignment associated with liaison while segmenting French speech, and whether compensation-for-liaison strategies differ with decreasing signal-to-noise ratios. We consider the degree to which listeners rely on lexical knowledge, acoustic-phonetic cues, and distributional information to accomplish this compensation. Listeners completed a word identification task in which they heard adjective-noun sequences with or without liaison and were presented with the word or nonword alternatives for each noun that would result depending on whether the listener did or did not compensate for liaison. Results showed that both L1-French and L2-French listeners generally preferred lexically acceptable parses over those that resulted in a stranded nonword, and both groups gave significantly fewer lexically acceptable parses under harder listening conditions. However, the L2-French listeners demonstrated a pattern of boundary placement that indicated over-compensation for liaison, suggesting that they had successfully acquired, but not fully constrained, rules about liaison.
Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J
Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.
National Aeronautics and Space Administration — The ASTER L2 Surface Reflectance is a multi-file product that contains atmospherically corrected data for both the Visible Near-Infrared (VNIR) and Shortwave...
Kranendijk, M; Salomons, G S; Gibson, K M
L-2-hydroxyglutaric aciduria (L-2-HGA) is a rare inherited autosomal recessive neurometabolic disorder caused by mutations in the gene encoding L-2-hydroxyglutarate dehydrogenase. An assay to evaluate L-2-hydroxyglutarate dehydrogenase (L-2-HGDH) activity in fibroblast, lymphoblast and/or lymphoc...... the relationship between molecular and biochemical observations. Residual activity was detected in cells derived from one L-2-HGA patient. The L-2-HGDH assay will be valuable for examining in vitro riboflavin/FAD therapy to rescue L-2-HGDH activity....
Grebenshchikov, S.E.; Batanov, G.M.; Fedyanin, O.I.
The first results of ECH experiments in the L-2M stellarator are presented. The main goal of the experiments is to investigate the physics of ECH and plasma confinement at very high values of the volume heating power density. A current free plasma is produced and heated by extraordinary waves at the second harmonic of the electron cyclotron frequency. The experimental results are compared with the numerical simulations of plasma confinement and heating processes based on neoclassical theory using the full matrix of transport coefficients and with LHD-scaling. 4 refs., 2 figs
Tunmer, William E.; Chapman, James W.
This study investigated the hypothesis that vocabulary influences word recognition skills indirectly through "set for variability", the ability to determine the correct pronunciation of approximations to spoken English words. One hundred forty children participating in a 3-year longitudinal study were administered reading and…
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
Simonchyk, Ala; Darcy, Isabelle
The current study investigated the potential facilitative or inhibiting effects of orthography on the lexical encoding of palatalized consonants in L2 Russian. We hypothesized that learners with stable knowledge of orthographic and metalinguistic representations of palatalized consonants would display more accurate lexical encoding of the plain/palatalized contrast. The participants of the study were 40 American learners of Russian. Ten Russian native speakers served as a control group. The materials of the study comprised 20 real words, familiar to the participants, with target coronal consonants alternating in word-final and intervocalic positions. The participants performed three tasks: written picture naming, metalinguistic, and auditory word-picture matching. Results showed that learners were not entirely familiar with the grapheme-phoneme correspondences in L2 Russian. Even though they spelled almost all of these familiar Russian words accurately, they were able to identify the plain/palatalized status of the target consonants in these words with only about 80% accuracy on the metalinguistic task. The effect of orthography on lexical encoding was found to depend on the syllable position of the target consonants. In intervocalic position, learners erroneously relied on the vowels following the target consonants, rather than the consonants themselves, to encode words with plain/palatalized consonants. In word-final position, although learners possessed the orthographic and metalinguistic knowledge of the difference in the palatalization status of the target consonants, and hence had established some aspects of the lexical representations for the words, those representations appeared to lack phonological granularity and detail, perhaps due to the lack of perceptual salience.
Targeting the specific problems learners have with language structure, these multi-sensory exercises appeal to all age groups, including adults. Exercises use sight, sound and touch and are also suitable for English as an Additional Language and Basic Skills students. Word Wheels includes off-the-shelf resources such as lesson plans and photocopiable worksheets, an interactive CD with practice exercises, and support material for the busy teacher or non-specialist staff, as well as homework activities.
The objectives of this study are (a) to determine if native speakers of Canadian French at different English proficiencies can use primary stress for recognizing English words and (b) to specify how the second language (L2) learners' (surface-level) knowledge of L2 stress placement influences their use of primary stress in L2 word recognition. Two…
Hemsley, Gayle; Holm, Alison; Dodd, Barbara
This study investigated cross-linguistic influence in acquisition of a second lexicon, evaluating Samoan-English sequentially bilingual children (initial mean age 4;9) during their first 18 months of school. Receptive and Expressive Vocabulary tasks evaluated acquisition of four word types: cognates, matched nouns, phrasal nouns and holonyms. Each word type had varying phonological and conceptual difference between Samoan (L1) and English (L2). Results highlighted conceptual distance between L1 and L2 as a key factor in L2 lexical acquisition. The children acquired L2 lexical items earlier if their conceptual representation was similar to that of L1. Words with greater conceptual distance between L1 and L2 emerged more slowly. This suggests that L1 knowledge influences L2 lexical consolidation for sequential bilinguals. Words that require a conceptual shift from L1 take longer to consolidate and strengthen within the L2 lexicon.
Bonin, Patrick; Laroche, Betty; Perret, Cyril
The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
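The additive factors logic invoked in this abstract can be made concrete with a toy calculation. In the sketch below, all cell means are invented for illustration (they are not the study's data): if word frequency and neighborhood density act at different processing levels, the interaction contrast over the 2x2 design should be (approximately) zero.

```python
# Hypothetical mean spelling-to-dictation latencies (ms) in a 2x2 design:
# word frequency (high/low) x phonological neighborhood density (dense/sparse).
# All numbers are invented for illustration only.
means = {
    ("high", "dense"):  820,
    ("high", "sparse"): 850,
    ("low",  "dense"):  880,
    ("low",  "sparse"): 910,
}

# Main effect of each factor (average difference across the other factor).
freq_effect = ((means[("low", "dense")] + means[("low", "sparse")])
               - (means[("high", "dense")] + means[("high", "sparse")])) / 2
density_effect = ((means[("high", "sparse")] + means[("low", "sparse")])
                  - (means[("high", "dense")] + means[("low", "dense")])) / 2

# Interaction contrast: zero means the two effects are additive, which under
# the additive factors logic is consistent with the factors acting at
# different processing stages; overadditivity would suggest a shared stage.
interaction = ((means[("low", "sparse")] - means[("low", "dense")])
               - (means[("high", "sparse")] - means[("high", "dense")]))

print(freq_effect, density_effect, interaction)  # 60.0 30.0 0
```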
Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda
In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...
The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...
As an experienced teacher of advanced learners of English, I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from the spoken learner corpus LINDSEI_CZ and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, use of potentially confusing rules, and the lack of attempts to deal with broader notions such as continuity and perfectiveness. Whilst the study is useful for teachers of advanced learners, its pedagogical implications stretch to lower levels of proficiency as well.
Kasper, Gabriele; Wagner, Johannes
This postscript discusses the contributions of the four papers in this issue to the field and positions them in relation to other studies in recent CA research on L2. The papers focus on the two arenas for second language learning: the classroom and the life world of learners. These arenas are widely different from each other, and equally within, with respect to organization and participation frameworks and the social practices deployed, but the interactional problems that participants confront inside and outside of the classroom partially overlap. Learning and teaching objects (or ‘learnables’) are brought into being by the participants through their joint action, at particular moments in the ongoing activity, be it in a classroom or a situation in the life-world. The four papers re-specify standard SLA concepts in interactional terms: attention and noticing (Kunitz), corrective feedback (Majlesi...
This article deals with the L2 acquisition of differences between Norwegian and English passives, and presents data to show that the acquisition of these differences by Norwegian L2 acquirers of English cannot be fully explained by positive evidence, cues, conservativism or economy. Rather, it is argued, it is natural to consider whether indirect negative evidence may facilitate acquisition by inferencing. The structures in focus are impersonal passive constructions with postverbal NPs and passive constructions with intransitive verbs. These sentences are ungrammatical in English. Chomsky (1981) proposes that this is a result of passive morphology absorbing objective case in English. There is no such case to be assigned to the postverbal NP in impersonal passives. In passive constructions with intransitive verbs, the verb does not assign objective case, so that there is no case for the passive morphology to absorb. Thus, impersonal passives have to be changed into personal passives, where the NP receives nominative case, and the objective case is free to go to the passive morphology. Intransitive verbs, however, cannot be used in the passive voice at all. Both of the structures discussed in this article are grammatical in Norwegian. However, the options available in English, viz. personal passives and active sentences, are equally possible. Åfarli (1992) therefore proposes that Norwegian has optional case absorption (passive morphology optionally absorbs case). On the basis of such observations, we may propose a parameter with the settings [+case absorption] for English, and [-case absorption], signifying optional case absorption, for Norwegian. This means that none of the structures that are grammatical in English can function as positive evidence for the [+case absorption] setting, since they are also grammatical in optional case absorption languages. The question is how this parameter is set.
Chen, Peiyao; Lin, Jie; Chen, Bingle; Lu, Chunming; Guo, Taomei
Emotional words in a bilingual's second language (L2) seem to have less emotional impact compared to emotional words in the first language (L1). The present study examined the neural mechanisms of emotional word processing in Chinese-English bilinguals' two languages by using both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Behavioral results show a robust positive word processing advantage in L1 such that responses to positive words were faster and more accurate compared to responses to neutral words and negative words. In L2, emotional words only received higher accuracies than neutral words. In ERPs, positive words elicited a larger early posterior negativity and a smaller late positive component than neutral words in L1, while a trend of reduced N400 component was found for positive words compared to neutral words in L2. In fMRI, reduced activation was found for L1 emotional words in both the left middle occipital gyrus and the left cerebellum whereas increased activation in the left cerebellum was found for L2 emotional words. Altogether, these results suggest that emotional word processing advantage in L1 relies on rapid and automatic attention capture while facilitated semantic retrieval might help processing emotional words in L2. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.
Word subject domains have been widely used to improve the performance of word sense disambiguation algorithms. However, comparatively little effort has been devoted so far to the disambiguation of word subject domains. The few existing approaches have focused on the development of algorithms specific to word domain disambiguation. In this paper we explore an alternative approach where word domain disambiguation is achieved via word sense disambiguation. Our study shows that this approach yields very strong results, suggesting that word domain disambiguation can be addressed in terms of word sense disambiguation with no need for special purpose algorithms.
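The idea of reducing domain disambiguation to sense disambiguation can be sketched in a few lines. Everything below (the sense labels, the domain mapping, and the toy overlap-based WSD step) is invented for illustration; a real system would use a full WSD algorithm and a lexical resource that annotates senses with domains.

```python
# Each sense carries a subject-domain label; once WSD picks a sense,
# the domain is read off that sense directly. (Sense inventory, domain
# labels, and cue words are all hypothetical.)
SENSE_DOMAIN = {
    "bank%river": "geography",
    "bank%finance": "economy",
}

def disambiguate(word, context):
    """Stand-in WSD step: choose the sense whose cue words overlap the
    context most. A real system would be far richer than this."""
    cues = {
        "bank%river": {"water", "shore", "fishing"},
        "bank%finance": {"money", "loan", "account"},
    }
    senses = [s for s in cues if s.startswith(word + "%")]
    return max(senses, key=lambda s: len(cues[s] & set(context)))

def domain_of(word, context):
    # Domain disambiguation via word sense disambiguation.
    return SENSE_DOMAIN[disambiguate(word, context)]

print(domain_of("bank", ["he", "opened", "an", "account"]))  # economy
```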
Michael Kevin Olsen
This study investigates L2 Spanish rhotic production in intermediate learners of Spanish, specifically addressing the duration of the influence of L1 English rhotic articulations and of a phonetic environment involving English taps on the acquisition of Spanish taps and trills that Olsen (2012) found. Results from multiple linear regressions involving thirty-five students in Spanish foreign language classes show that the effect of English rhotic articulations evident in beginners has disappeared after four semesters of Spanish study. However, results from paired samples t-tests show that these more advanced learners produced accurate taps significantly more often in words containing phonetic environments that produce taps in English. This effect is taken as evidence that L1 phonetic influences have a shorter duration on L2 production than do L1 phonological influences. These results provide insights into L2 rhotic acquisition which Spanish educators and students can use to formulate reasonable pronunciation expectations.
Paulus Insap Santosa
People with normal senses use spoken language to communicate with others. This method cannot be used by those who are hearing- and speech-impaired. These two groups of people will have difficulty when they try to communicate with each other using their own language. Sign language is not easy to learn, as there are various sign languages, and not many tutors are available. This research focused on simple word gesture recognition based on the characters that form the word to be recognized. The method used for character recognition was the nearest neighbour method. This method identified different fingers using the different markers attached to each finger. Testing of simple word gesture recognition was done by providing a series of characters that make up the intended simple word. The accuracy of simple word gesture recognition depended upon the accuracy of recognition of each character.
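The character-by-character scheme described above can be sketched as follows. The feature layout (concatenated marker coordinates per finger) and the template values are assumptions for illustration, not the paper's implementation; only the nearest-neighbour principle is taken from the abstract.

```python
import math

# Hypothetical character templates: each hand shape is represented by the
# concatenated (x, y) positions of the finger markers.
TEMPLATES = {
    "A": [0.1, 0.9, 0.2, 0.8, 0.3, 0.7],
    "B": [0.9, 0.1, 0.8, 0.2, 0.7, 0.3],
    "C": [0.5, 0.5, 0.4, 0.6, 0.6, 0.4],
}

def nearest_character(features):
    """Classify one hand shape by its nearest template (Euclidean distance)."""
    return min(TEMPLATES, key=lambda ch: math.dist(features, TEMPLATES[ch]))

def recognize_word(frames):
    """A word is recognized character by character from a gesture sequence,
    so word accuracy is bounded by per-character accuracy."""
    return "".join(nearest_character(f) for f in frames)

print(recognize_word([[0.1, 0.9, 0.2, 0.8, 0.3, 0.7],
                      [0.5, 0.5, 0.4, 0.6, 0.6, 0.4]]))  # AC
```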
Baus, Cristina; Strijkers, Kristof; Costa, Albert
The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner that words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.
Caffarra, Sendy; Martin, Clara D; Lizarazu, Mikel; Lallier, Marie; Zarraga, Asier; Molinaro, Nicola; Carreiras, Manuel
Studies on adults suggest that reading-induced brain changes might not be limited to linguistic processes. It is still unclear whether these results can be generalized to reading development. The present study shows to what extent neural responses to verbal and nonverbal stimuli are reorganized while children learn to read. MEG data of thirty Basque children (4-8y) were collected while they were presented with written words, spoken words and visual objects. The evoked fields elicited by the experimental stimuli were compared to their scrambled counterparts. Visual words elicited left posterior (200-300ms) and temporal activations (400-800ms). The size of these effects increased as reading performance improved, suggesting a reorganization of children's visual word responses. Spoken words elicited greater left temporal responses relative to scrambles (300-700ms). No evidence for the influence of reading expertise was observed. Brain responses to objects were greater than to scrambles in bilateral posterior regions (200-500ms). There was a greater left hemisphere involvement as reading errors decreased, suggesting a strengthened verbal decoding of visual configurations with reading acquisition. The present results reveal that learning to read not only influences written word processing, but also affects visual object recognition, suggesting a non-language specific impact of reading on children's neural mechanisms. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it…
Stölten, Katrin; Abrahamsson, Niclas; Hyltenstam, Kenneth
As part of a research project on the investigation of second language (L2) ultimate attainment in 41 Spanish early and late near-native speakers of L2 Swedish, the present study reports on voice onset time (VOT) analyses of the production of Swedish word-initial voiceless stops, /p t k/. Voice onset time is analyzed in milliseconds as well as in…
This paper reports the findings of a study that delved into the relationship between mother tongue (L1) word order competence and second language (L2) writing skills, taking the case of Acoli and English respectively. It reports that, triggered by concerns that schools that instruct their pupils in L1 before introducing L2 ...
The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…
approach. In comparing these two approaches, Chomsky writes: Our main conclusion will be that familiar linguistic theory has only a limited adequacy... from Chomsky: In general, we introduce an element or a sentence form transformationally only when by so doing we manage to eliminate special... testing and debugging of functionally isolated modules. LISP was considered because of the facility with which it can manipulate word strings. The
Sarker, Bijon K; Baek, Seunghyun
The current study investigated the distinction of L2 (second language) English article choice sensitivity in fifty-three L1-Korean L2-English learners in semantic contexts. In the context of English as a foreign language, the participants were divided into two groups based on grammatical ability as determined by their performance on a cloze test. In addition, a forced-choice elicitation test and a writing production test were administered to assess, respectively, the participants' receptive and productive article choice abilities. Regardless of grammatical ability, the results disclosed the overuse of the indefinite a in the [+definite, -specific] context and the definite the in the [-definite, +specific] context on the forced-choice elicitation test. In the [+definite, +specific] and [-definite, -specific] contexts, however, the overuse of either the indefinite a or the definite the, respectively, was less likely. Furthermore, it was revealed on the writing test that the participants more accurately used the definite the than the indefinite a, and they were also found to unreasonably omit more articles than to add or substitute articles on the writing production test. The findings across the two tests indicate that L1-Korean L2-English learners are more likely to have intrinsic difficulties transferring their L1 noun phrase (NP) knowledge to L2 NP knowledge owing to structural discrepancies and complex interfaces between L1 NPs and L2 NPs with respect to syntactic, semantic and pragmatic/discourse language subsystems.
Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.
In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the `accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172
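Two of the measures described in this abstract (within-speaker word duration variance and function-word reduction) are straightforward to compute from a list of word/duration pairs. The sketch below uses made-up durations and a tiny stand-in function-word list; it is an illustration of the measures, not the study's analysis code.

```python
from statistics import pvariance

# Tiny stand-in list of English function words (hypothetical subset).
FUNCTION_WORDS = {"the", "a", "of", "to", "and", "on"}

def duration_measures(tokens):
    """tokens: list of (word, duration_in_seconds) for one speaker.
    Returns the within-speaker duration variance and a function-word
    reduction ratio (smaller ratio = stronger reduction)."""
    durations = [d for _, d in tokens]
    func = [d for w, d in tokens if w in FUNCTION_WORDS]
    content = [d for w, d in tokens if w not in FUNCTION_WORDS]
    return {
        "within_speaker_variance": pvariance(durations),
        "function_word_ratio": (sum(func) / len(func)) / (sum(content) / len(content)),
    }

# Invented example durations (seconds).
m = duration_measures([("the", 0.08), ("cat", 0.30), ("sat", 0.28),
                       ("on", 0.10), ("the", 0.07), ("mat", 0.32)])
print(m)
```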
Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783
Bonin, Patrick; Boyer, Bruno; Méot, Alain; Fayol, Michel; Droit, Sylvie
A set of 142 photographs of actions (taken from Fiez & Tranel, 1997) was standardized in French on name agreement, image agreement, conceptual familiarity, visual complexity, imageability, age of acquisition, and duration of the depicted actions. Objective word frequency measures were provided for the infinitive modal forms of the verbs and for the cumulative frequency of the verbal forms associated with the photographs. Statistics on the variables collected for action items were provided and compared with the statistics on the same variables collected for object items. The relationships between these variables were analyzed, and certain comparisons between the current database and other similar published databases of pictures of actions are reported. Spoken and written naming latencies were also collected for the photographs of actions, and multiple regression analyses revealed that name agreement, image agreement, and age of acquisition are the major determinants of action naming speed. Finally, certain analyses were performed to compare object and action naming times. The norms and the spoken and written naming latencies corresponding to the pictures are available on the Internet (http://www.psy.univ-bpclermont.fr/~pbonin/pbonin-eng.html) and should be of great use to researchers interested in the processing of actions.
Bultena, S.S.; Dijkstra, A.F.J.; Hell, J.G. van
Noun translation equivalents that share orthographic and semantic features, called "cognates", are generally recognized faster than translation equivalents without such overlap. This cognate effect, which has also been obtained when cognates and noncognates were embedded in a sentence context,
Aye Min Soe
The main idea of this paper is to develop a speech recognition system. By using this system, smart home appliances are controlled by spoken words. The spoken words chosen for recognition are "Fan On", "Fan Off", "Light On", "Light Off", "TV On", and "TV Off". The input of the system takes speech signals to control home appliances. The proposed system has two main parts: speech recognition and an electronic control system for the smart home appliances. Speech recognition is implemented in the MATLAB environment. This process contains two main modules: feature extraction and feature matching. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction. A Vector Quantization (VQ) approach using a clustering algorithm is applied for feature matching. In the electrical home appliance control system, an RF module is used to carry the command signal from the PC to the microcontroller wirelessly. The microcontroller is connected to a driver circuit for the relay and motor. The input commands are recognized very well, and the system performs well in controlling home appliances by spoken words.
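The VQ feature-matching stage described above can be sketched as follows: train one codebook per command with a clustering algorithm (k-means here), then match an utterance to the command whose codebook yields the smallest average quantization distortion. The feature vectors below are made up for illustration; a real system would first extract MFCC vectors from the audio, and the paper's MATLAB implementation may differ in detail.

```python
import math
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means for learning a VQ codebook of k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda j: math.dist(v, centroids[j]))
            clusters[i].append(v)
        # Recompute centroids; keep the old one if a cluster is empty.
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

def distortion(vectors, codebook):
    """Average distance from each frame to its nearest codebook centroid."""
    return sum(min(math.dist(v, c) for c in codebook) for v in vectors) / len(vectors)

# Hypothetical 2-D training feature vectors per spoken command
# (real MFCC vectors would have more dimensions).
train = {
    "fan on":  [[0.1, 0.2], [0.15, 0.25], [0.12, 0.18]],
    "fan off": [[0.9, 0.8], [0.85, 0.75], [0.88, 0.82]],
}
codebooks = {cmd: kmeans(vs, k=2) for cmd, vs in train.items()}

def match(utterance):
    """Return the command whose codebook best quantizes the utterance."""
    return min(codebooks, key=lambda cmd: distortion(utterance, codebooks[cmd]))

print(match([[0.11, 0.21], [0.14, 0.22]]))  # fan on
```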
Goldrick, Matthew; Folk, Jocelyn R.; Rapp, Brenda
Many theories of language production and perception assume that in the normal course of processing a word, additional non-target words (lexical neighbors) become active. The properties of these neighbors can provide insight into the structure of representations and processing mechanisms in the language processing system. To infer the properties of neighbors, we examined the non-semantic errors produced in both spoken and written word production by four individuals who suffered neurological injury. Using converging evidence from multiple language tasks, we first demonstrate that the errors originate in disruption to the processes involved in the retrieval of word form representations from long-term memory. The targets and errors produced were then examined for their similarity along a number of dimensions. A novel statistical simulation procedure was developed to determine the significance of the observed similarities between targets and errors relative to multiple chance baselines. The results reveal that in addition to position-specific form overlap (the only consistent claim of traditional definitions of neighborhood structure) the dimensions of lexical frequency, grammatical category, target length and initial segment independently contribute to the activation of non-target words in both spoken and written production. Additional analyses confirm the relevance of these dimensions for word production showing that, in both written and spoken modalities, the retrieval of a target word is facilitated by increasing neighborhood density, as defined by the results of the target-error analyses. PMID:20161591
Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Simon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinge, Dennis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lin, Paul T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vaughan, Courtenay T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cook, Jeanine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Edwards, Harold C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajan, Mahesh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoekstra, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
For the FY15 ASC L2 Trilab Codesign milestone, Sandia National Laboratories performed two main studies. The first study investigated three topics (performance, cross-platform portability, and programmer productivity) when using OpenMP directives and the RAJA and Kokkos programming models, available from LLNL and SNL respectively. The focus of this first study was the LULESH mini-application developed and maintained by LLNL. In the coming sections of the report, the reader will find performance comparisons (and a demonstration of portability) for a variety of mini-application implementations produced during this study with varying levels of optimization. Of note is that the implementations included optimizations across a number of programming models, to help ensure that claims that Kokkos can provide native-class application performance are valid. The second study performed during FY15 is a performance assessment of the MiniAero mini-application developed by Sandia. This mini-application was developed by the SIERRA Thermal-Fluid team at Sandia for the purpose of learning the Kokkos programming model, and so is available in only a single implementation. For this report we studied its performance and scaling on a number of machines, with the intent of providing insight into potential performance issues that may be experienced when similar algorithms are deployed on the forthcoming Trinity ASC ATS platform.
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li
Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017
Despite the growing number of studies highlighting the complex process of acquiring second language (L2) word recognition skills, comparatively little research has examined the relationship between word recognition and passage-level reading ability in L2 learners; further, the existing results are inconclusive. This study aims to help fill the…
Speech audiometry is one of the standard methods used to diagnose the type of hearing loss and to assess the communication function of the patient by determining the level of the patient's ability to understand and repeat words presented in a hearing test. For this purpose, the Slovenian adaptations of the German tests developed by Hahlbrock (1953, 1960) – the Freiburg Monosyllabic Word Test and the Freiburg Number Test – are used in Slovenia (adapted in 1968 by Pompe). In this paper we focus on the Freiburg Monosyllabic Word Test for Slovenian, which has been criticized by patients as well as in the literature for the unequal difficulty and frequency of its words, many of which are extremely rare or even obsolete. As part of the patient's communication function is retrieving the meaning of individual words by guessing, the less frequent and consequently less familiar words do not contribute to reliable testing results. We therefore adapt the test by identifying and removing such words and replacing them with phonetically similar words, so as to preserve the phonetic balance of the list. The words used for replacement are extracted from the written corpus of Slovenian, Gigafida, and the spoken corpus of Slovenian, GOS, while the optimal combinations of words are established using computational algorithms.
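One replacement step of this kind might look like the sketch below. This is an illustration only: the frequency values and words are invented, and the similarity test (same initial sound and same length) is far cruder than the phonetic balancing the authors describe.

```python
# Illustrative sketch (not the authors' algorithm): replace a rare test word
# with a frequent, phonetically similar word drawn from a corpus frequency list.
# Frequencies and words below are invented.
FREQ = {"miza": 850, "roka": 920, "mlin": 12, "most": 770, "rep": 640}

def similar(a, b):
    # Crude stand-in for phonetic similarity: same initial sound, same length.
    return a[0] == b[0] and len(a) == len(b)

def replacement(rare, lexicon, min_freq=100):
    """Most frequent sufficiently common word phonetically similar to `rare`."""
    candidates = [w for w in lexicon
                  if w != rare and similar(w, rare) and FREQ[w] >= min_freq]
    return max(candidates, key=FREQ.get, default=None)

print(replacement("mlin", FREQ))
```

A full procedure would additionally check that each swap keeps the phoneme distribution of the whole word list balanced.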
Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.
Compared with the study of acquisition of syntax and morphology, there is a relative lack of research on the acquisition of phonology, the L2 acquisition of word stress in particular. This paper investigates the production of word stress by 70 Chinese college students in their reading aloud. Altogether 350 minutes' recordings were collected and…
Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois
The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.
Banas, E.; Ducorps, A.
The monitoring software for the L2 Topological Trigger in the H1 experiment consists of two parts running on two different computers. The hardware read-out and data processing are done on a fast FIC 8234 computer running the OS9 real-time operating system. A Macintosh Quadra is used as a graphical user interface for accessing the OS9 trigger monitoring software. Communication between the two computers is based on a parallel connection between the Macintosh and the VME crate in which the FIC computer is placed. A specially designed client-server protocol is used to communicate between the two nodes. This guide gives the general monitoring scheme for the L2 Topological Trigger and a detailed description of the use of the monitoring software on both nodes. (author)
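The client-server exchange between the two nodes might be sketched as below. This is a hypothetical request-reply protocol over a local socket pair; the actual H1 protocol ran over a parallel Macintosh-VME link, and the command names and replies here are invented.

```python
import socket
import threading

def server(conn):
    # "OS9 side": answers status requests from the GUI node until told to quit.
    status = {b"RATE?": b"RATE 1250", b"STATE?": b"STATE RUNNING"}
    while True:
        cmd = conn.recv(64)
        if not cmd or cmd == b"QUIT":
            break
        conn.sendall(status.get(cmd, b"ERR unknown"))
    conn.close()

gui, os9 = socket.socketpair()
t = threading.Thread(target=server, args=(os9,))
t.start()

gui.sendall(b"RATE?")            # GUI node asks for a trigger rate
reply = gui.recv(64).decode()
gui.sendall(b"QUIT")
t.join()
gui.close()
print(reply)
```

The strict one-request, one-reply pattern keeps message framing trivial, which is one reason simple monitoring links are often designed this way.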
Discourse markers are a collection of one-word or multiword terms that help language users organize their utterances on the grammatical, semantic, pragmatic and interactional levels. Researchers have characterized some of their roles in written and spoken discourse (Halliday & Hasan, 1976; Schiffrin, 1988, 2001). Following this trend, this paper advances a discussion of discourse markers in contemporary academic spoken English. Through quantitative and qualitative analyses of the use of the discourse marker ‘you know’ in the Michigan Corpus of Academic Spoken English (MICASE), we describe its frequency in this corpus, its collocations at the sentence level and its interactional functions. Grammatically, a concordance analysis shows that you know (like other discourse markers) is linguistically flexible, as it seems to be placed in any grammatical slot of an utterance. Interactionally, a qualitative analysis indicates that its use in contemporary English goes beyond the uses described in the literature. We argue that besides serving as a hedging strategy (Lakoff, 1975), you know also serves as a powerful face-saving (Goffman, 1955) technique which constructs students’ identities vis-à-vis their professors’ and vice versa.
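A minimal version of such a concordance count might look like this (the corpus lines below are invented, not drawn from MICASE):

```python
# Toy concordance sketch: count occurrences of the discourse marker "you know"
# and record its immediate left and right collocates.
corpus = [
    "so you know the results were clear",
    "it was you know a hard exam",
    "you know we could try again",
]

hits = []
for line in corpus:
    words = line.split()
    for i in range(len(words) - 1):
        if words[i] == "you" and words[i + 1] == "know":
            left = words[i - 1] if i > 0 else "<start>"
            right = words[i + 2] if i + 2 < len(words) else "<end>"
            hits.append((left, right))

print(len(hits), hits)
```

Tabulating the `(left, right)` pairs over a real corpus is what reveals that the marker occurs in essentially any grammatical slot of an utterance.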
Lesaux, Nonie K; Crosson, Amy C; Kieffer, Michael J; Pierce, Margaret
English reading comprehension skill development was examined in a group of 87 native Spanish-speakers developing English literacy skills, followed from fourth through fifth grade. Specifically, the effects of Spanish (L1) and English (L2) oral language and word reading skills on reading comprehension were investigated. The participants showed average word reading skills and below average comprehension skills, influenced by low oral language skills. Structural equation modeling confirmed that L2 oral language skills had a large, significant effect on L2 reading comprehension, whereas students' word-level reading skills, whether in L1 or L2, were not significantly related to English reading comprehension in three of four models fitted. The results converge with findings from studies with monolinguals demonstrating the influence of oral language on reading comprehension outcomes, and extend these findings by showing that, for language minority learners, L2 oral language exerts a stronger influence than word reading in models of L2 reading.
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M
A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
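The graph measures named above (degree and local efficiency) can be illustrated on a toy undirected graph. This is a simplified sketch: the node names and edges are invented, and the local-efficiency formula here counts only direct links between neighbours rather than full shortest paths over whole-brain connectivity matrices.

```python
# Toy adjacency structure (invented); real analyses use weighted connectivity
# matrices over many brain regions.
graph = {
    "STG": {"A", "B", "C"},
    "A": {"STG", "B"},
    "B": {"STG", "A"},
    "C": {"STG"},
}

def degree(g, n):
    """Number of direct connections of node n."""
    return len(g[n])

def local_efficiency(g, n):
    """Mean inverse distance between n's neighbours, simplified so that a
    direct link counts as distance 1 and anything else as unreachable."""
    nbrs = sorted(g[n])
    pairs = [(i, j) for i in nbrs for j in nbrs if i < j]
    if not pairs:
        return 0.0
    return sum(1.0 if j in g[i] else 0.0 for i, j in pairs) / len(pairs)

print(degree(graph, "STG"), local_efficiency(graph, "STG"))
```

In the study, higher values of such node-level measures for the left superior temporal gyrus went with better learning outcomes.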
Patro, Chhayakanta; Mendel, Lisa Lucks
Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. Method: Listeners with CIs as well as those with normal hearing (NH)…
Speakers of gender-agreement languages use gender-marked elements of the noun phrase in spoken-word recognition: A congruent marking on a determiner or adjective facilitates the recognition of a subsequent noun, while an incongruent marking inhibits its recognition. However, while monolinguals and early language learners evidence this…
Milburn, Trelani F.; Hipfner-Boucher, Kathleen; Weitzman, Elaine; Greenberg, Janice; Pelletier, Janette; Girolametto, Luigi
Preschool children begin to represent spoken language in print long before receiving formal instruction in spelling and writing. The current study sought to identify the component skills that contribute to preschool children's ability to begin to spell words and write their name. Ninety-five preschool children (mean age = 57 months) completed a…
Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the
Tincoff, Ruth; Jusczyk, Peter W.
Comprehending spoken words requires a lexicon of sound patterns and knowledge of their referents in the world. Tincoff and Jusczyk (1999) demonstrated that 6-month-olds link the sound patterns "Mommy" and "Daddy" to video images of their parents, but not to other adults. This finding suggests that comprehension emerges at this young age and might…
Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano
Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…
Vales, Catarina; Smith, Linda B.
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…
Pavlenko's keynote paper calls for a rethinking of models of the mental lexicon in the light of recent research into emotion and bilingualism. The author makes a convincing case for the inclusion of affective aspects in the study of the mental lexicon. Indeed, the knowledge of the degree of emotionality of a word and of its affective valence is…
Sparks, Richard L.; Patton, Jon; Ganschow, Leonore; Humbach, Nancy
The study examined whether individual differences in high school first language (L1) reading achievement and print exposure would account for unique variance in second language (L2) written (word decoding, spelling, writing, reading comprehension) and oral (listening/speaking) proficiency after adjusting for the effects of early L1 literacy and…
Takeuchi, Osamu; Ikeda, Maiko; Mizumoto, Atsushi
This article explores the cerebral mechanisms of reading-aloud activities in L2 learners. These activities have been widely used in L2 learning and teaching, and their effectiveness has been reported in various Asian L2 learning contexts. However, the reasons for this effectiveness have not been examined. In order to fill this gap, two studies using a…
aus der Wieschen, Maria Vanessa
Social Interaction and L2 Classroom Discourse investigates interactional practices in L2 classrooms. Using Conversation Analysis, the book unveils the processes underlying the co-construction of mutual understanding in potential interactional troubles in L2 classrooms – such as claims of insuffic...
This study explores the effects of prolonged L2 exposure on L1 grammar, and seeks to understand the extent to which mental representations of the L1 are modified under influence of the L2. The constructions under examination are overt and null subjects in Italian L1, Dutch L2 and how these forms are
The aim of the present study was cloning and expressing the fragment coding for L2 region of human EGFR for the production of recombinant L2 protein. The total RNA from A431 cells line was extracted and used for amplification of the sequence coding for L2 domain of EGFR by reverse transcriptase-polymerase chain ...
Foltz, Franz; Foltz, Frederick
The authors explore how technique via propaganda has replaced the word with images creating a mass society and limiting the ability of people to act as individuals. They begin by looking at how words affect human society and how they have changed over time. They explore how technology has altered the meaning of words in order to create a more…
The study tested the impact of the phonological and lexical distance between a dialect of Palestinian Arabic spoken in the north of Israel (SpA) and Modern Standard Arabic (StA or MSA) on word and non-word repetition in children with specific language impairment (SLI) and in typically developing (TD) age-matched controls. Fifty kindergarten children (25 SLI, 25 TD; mean age 5;5) and fifty first-grade children (25 SLI, 25 TD; mean age 6;11) were tested with a repetition task for 1-4 syllable long real words and pseudowords. Items varied systematically in whether each encoded a novel StA phoneme, namely a phoneme that is used only in StA but not in the spoken dialect targeted. Real words also varied in whether they were lexically novel, meaning whether the word is used only in StA but not in SpA. SLI children were found to significantly underperform TD children on all repetition tasks, indicating a general phonological memory deficit. More interesting for the current investigation is the observed strong and consistent effect of phonological novelty on word and non-word repetition in SLI and TD children, with a stronger effect observed in SLI. In contrast with phonological novelty, the effect of lexical novelty on word repetition was limited, and it did not interact with group. The results are argued to reflect the role of linguistic distance in phonological memory for novel linguistic units in Arabic SLI and, hence, to support a specific Linguistic Distance Hypothesis of SLI in a diglossic setting. The implications of the findings for assessment, diagnosis and intervention with Arabic-speaking children with SLI are discussed.
De Grauwe, Sophie; Lemhöfer, Kristin; Willems, Roel M; Schriefers, Herbert
In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen "put aside." Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen "fall down") were preceded by their stem (e.g., vallen "fall") with a lag of 4-6 words ("primed"); the other half (e.g., inslapen "fall asleep") were not ("unprimed"). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.
Steenweg, Marjan E; Jakobs, Cornelis; Errami, Abdellatif
L-2-Hydroxyglutaric aciduria (L2HGA) is a rare, neurometabolic disorder with an autosomal recessive mode of inheritance. Affected individuals only have neurological manifestations, including psychomotor retardation, cerebellar ataxia, and more variably macrocephaly, or epilepsy. The diagnosis of ...
Seyyed Fariborz Pishdadi Motlagh
Input enhancement's role in promoting learners' awareness in L2 contexts has prompted a tremendous amount of research. Considering all aspects of input enhancement, this study aimed to find out how different kinds of input enhancement, such as bolding, underlining, and capitalizing, affect L2 learners' vocabulary acquisition. The study was conducted through a quasi-experimental design, with a proficiency test to establish how homogeneous the groups were. Four classes were selected as the experimental groups (n = 80); each class received one of the main input enhancement types and was compared with the control group. Subjects attended eight sessions to familiarize them with the advantages of input enhancement for vocabulary learning. Each group received a different strategy, while the control group received no treatment; the researcher then taught and employed those enhancements in texts along with target words. Learners' progress in responding to vocabulary questions was measured over the eight sessions. A one-sample Kolmogorov-Smirnov test and a series of one-way ANOVAs with LSD post hoc comparisons showed that all three types of enhancement were effective for responding to target vocabulary words compared with the control group, but that the bolding group did better than the other groups. Thus, bolding target words was most effective in fostering L2 learners' vocabulary learning. These outcomes suggest that input enhancement is useful in directing learners to target words; bolding in particular outperformed the other enhancements in developing learners' awareness for answering vocabulary tests, while capitalizing was the least effective compared to underlining and bolding. Keywords: focus on form and implicit FonF, input enhancement as focus on form, vocabulary
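The F statistic behind such a one-way ANOVA can be computed directly, as sketched below. The group scores are invented for illustration; the study's actual analysis also included the Kolmogorov-Smirnov normality check and LSD post hoc comparisons.

```python
# One-way ANOVA by hand on hypothetical vocabulary-test scores for three groups.
groups = {
    "bolding":     [18, 19, 17, 20],
    "underlining": [15, 16, 14, 15],
    "control":     [12, 11, 13, 12],
}

scores = [x for g in groups.values() for x in g]
grand = sum(scores) / len(scores)  # grand mean over all observations

# Between-group sum of squares: spread of group means around the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                 for g in groups.values())
# Within-group sum of squares: spread of scores around their own group mean.
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

df_b = len(groups) - 1            # between-groups degrees of freedom
df_w = len(scores) - len(groups)  # within-groups degrees of freedom
F = (ss_between / df_b) / (ss_within / df_w)
print(round(F, 2))
```

A large F (relative to the F distribution with `df_b`, `df_w` degrees of freedom) indicates that the group means differ more than within-group noise would predict.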
Venker, Courtney E.
Deficits in visual disengagement are one of the earliest emerging differences in infants who are later diagnosed with autism spectrum disorder. Although researchers have speculated that deficits in visual disengagement could have negative effects on the development of children with autism spectrum disorder, we do not know which skills are…
Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.
Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…
Scharinger, Mathias; Monahan, Philip J; Idsardi, William J
Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.
Piai, V.; Roelofs, A.P.A.; Acheson, D.J.; Takashima, A.
Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and…
This article presents research exploring the knowledge pupils bring to texts introduced to them for literary study, how they share knowledge through talk, and how it is elicited by the teacher in the course of an English lesson. It sets classroom discussion in a context where new examination requirements diminish the relevance of social, cultural…
Full Text Available This paper examines the impact of digital media on the relationship between writing, performance, and textuality from the perspective of literate verbal artists in Mali. It considers why some highly educated verbal artists in urban Africa self-identify as writers despite the oralizing properties of new media, and despite the fact that their own works circulate entirely through performance. The motivating factors are identified as a desire to present themselves as composers rather than as performers of texts, and to differentiate their work from that of minimally educated performers of texts associated with traditional orality.
Comments by Roy Reider on chemical criticality control, the fundamentals of safety, policy and responsibility, written procedures, profiting from accidents, safety training, the early history of criticality safety, requirements for the possible, the value of enlightened challenge, public acceptance of a new risk, and prophets of doom are presented.
Full Text Available This essay explores the relationship between religion and language through a literature review of animist scholarship and, in particular, a case study of the animist worldview of Hmong immigrants to the United States. An analysis of the existing literature reveals how the Hmong worldview (which has remained remarkably intact despite widely dispersed settlements) both informs and is informed by the Hmong language. Hmong is contrasted with English with regard to both languages' respective affinities to the scientific worldview and Christianity. I conclude that Hmong and other "pre-scientific" languages have fundamental incompatibilities with the Western worldview (which both informs and is informed by dualistic linguistic conventions of modern language, a modern notion of scientific causality, and Judeo-Christian notions of the body/soul dichotomy). This incompatibility proves to be a major stumbling block for Western scholars of animist religion, who bring their own linguistic and cultural biases to their scholarship.
Scarbrough, Burke; Allen, Anna-Ruth
Workshop pedagogy is a staple of writing classrooms at all levels. However, little research has explored the pedagogical moves that can address longstanding critiques of writing workshop, nor the sorts of rhetorical challenges that teachers and students in secondary classrooms can tackle through workshops. This article documents and analyzes the…
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Steenweg, Marjan E; Jakobs, Cornelis; Errami, Abdellatif
L-2-Hydroxyglutaric aciduria (L2HGA) is a rare, neurometabolic disorder with an autosomal recessive mode of inheritance. Affected individuals only have neurological manifestations, including psychomotor retardation, cerebellar ataxia, and more variably macrocephaly, or epilepsy. The diagnosis of L2...
This article offers a model of Arabic word reading according to which three conspicuous features of the Arabic language and orthography shape the development of word reading in this language: (a) vowelization/vocalization, or the use of diacritical marks to represent short vowels and other features of articulation; (b) morphological structure, namely, the predominance and transparency of derivational morphological structure in the linguistic and orthographic representation of the Arabic word; and (c) diglossia, specifically, the lexical and lexico-phonological distance between the spoken and the standard forms of Arabic words. It is argued that the triangulation of these features governs the acquisition and deployment of reading mechanisms across development. Moreover, the difficulties that readers encounter in their journey from beginning to skilled reading may be better understood if evaluated within these language-specific features of Arabic language and orthography.
Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel
The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
To investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their deaf/hard-of-hearing (D/HH) children with cochlear implants and hearing aids, to support children's spoken language development. Quasi-experimental. Tertiary. Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK) and children aged …; a curriculum designed to improve D/HH children's early language environments. Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
Full Text Available Assuming that word-prosodic parameters are organized into a hierarchical tree where certain parameters are embedded under others, this paper proposes the Prosodic Acquisition Path Hypothesis (PAPH). The PAPH predicts different levels of difficulty and paths to be followed by L2 (and L1) learners based on the typological properties of their L1 and the L2 they are learning. On the PAPH, L2 acquisition is assumed to be brought about via a process of parameter resetting. During this process, certain parameters are expected to be easier to reset than others, based on such factors as economy, markedness, and robustness of the input, which is reflected in part by their location on the tree of parameters proposed in this paper. Evidence for the proposal comes from previous formal phonological and L1 acquisition literature. The predictions concerning the learning path are tested through an experiment which examines productions of English-speaking learners of Turkish, thereby involving two languages that are maximally different from each other regarding the location of word-level prominence, as well as how it is assigned. The PAPH is a restrictive (and falsifiable) approach, where the predictions regarding the stages learners go through are constrained both by certain learning principles and by the options made available by UG.
Henderson, Jennifer A
Words are all around us to the point that their complexity is lost in familiarity. The term “word” itself can ambiguously refer to different linguistic concepts: orthographic words, phonological words, grammatical words, word-forms, lexemes, and to an extent lexical items. While it is hard to come up with exception-less criteria for wordhood, some typical properties are that words are writeable and spellable, consist of morphemes, are syntactic units, carry meaning, and interrelate with oth...
Adesope, Olusola O.; Nesbit, John C.
An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…
Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena
The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…
assessment instrument used to formally assess the spoken-language educational interpreters at Stellenbosch University (SU). Research … Is the interpreter suited to the module? Is the interpreter easier to follow? Technical criteria: microphone technique, lag, completeness, language use, vocabulary, role, personal objectives …
This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Corpus of Spoken Afrikaans (Korpus Gesproke Afrikaans) to retrain the ALICE chatbot system with human …
Therefore, this paper examined vowel insertion in the spoken French of 50 Ijebu Undergraduate French Learners (IUFLs) in Selected Universities in South West of Nigeria. The data collection for this study was through tape-recording of participants' production of 30 sentences containing both French vowel and consonant ...
Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.
Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication, …
ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn
This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML templates to modify the information state and to select behaviours to perform.
English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…
Canisius, S.V.M.; van den Bosch, A.; Decadt, B.; Hoste, V.; De Pauw, G.
We describe the development of a Dutch memory-based shallow parser. The availability of large treebanks for Dutch, such as the one provided by the Spoken Dutch Corpus, allows memory-based learners to be trained on examples of shallow parsing taken from the treebank, and to act as a shallow parser after…
Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…
Full Text Available … the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use…
van der Werff, Laurens Bastiaan
This thesis introduces a novel framework for the evaluation of Automatic Speech Recognition (ASR) transcripts in a Spoken Document Retrieval (SDR) context. The basic premise is that ASR transcripts must be evaluated by measuring the impact of noise in the transcripts on the search results of a…
This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just as a result of the systemic differences that exist between both language systems (Izon and English) but also as a result of the interlanguage factors such ...
This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult…
Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…
This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...
… sound of that language. These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian languages processing. People from different regions of India speak around 800 different languages.
Moran, Catherine; Kirk, Cecilia; Powell, Emma
Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…
Grant, Lynn E.
This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…
Tao, Hongyin; McCarthy, Michael J.
Reexamines the notion of non-restrictive relative clauses (NRRCs) in light of spoken corpus evidence, based on analysis of 692 occurrences of non-restrictive "which"-clauses in British and American spoken English data. Reviews traditional conceptions of NRRCs and recent work on the broader notion of subordination in spoken grammar.…
The development, construction, and test of a 100-word vocabulary near real time word recognition system are reported. Included are reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via verification display keyboard, relationship of classified word to neighboring word, and a versatile input/output interface to accommodate a variety of applications.
Baten, Kristof; Hofman, Fabrice; Loeys, Tom
This study investigates how categorial (word class) semantics influences cross-linguistic interactions when reading in L2. Previous homograph studies paid little attention to the possible influence of different word classes in the stimulus material on cross-linguistic activation. The present study examines the word recognition performance of…
Kowal, Sabine; O'Connell, Daniel C
The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.
Benoit, Laurent; Lehalle, Henri; Molina, Michèle; Tijus, Charles; Jouen, François
This study investigates when young children develop the ability to map between three numerical representations: arrays, spoken number words, and digits. Children (3, 4, and 5 years old) had to map between the two directions (e.g., array-to-digit vs. digit-to-array) of each of these three representation pairs, with small (1-3) and large numbers (4-6). Five-year-olds were at ceiling in all tasks. Three-year-olds succeeded when mapping between arrays and number words for small numbers (but not large numbers), and failed when mapping between arrays and digits and between number words and digits. The main finding was that four-year-olds performed equally well when mapping between arrays and number words and when mapping between arrays and digits. However, they performed more poorly when mapping between number words and digits. Taken together, these results suggest that children first learn to map number words to arrays, then learn to map digits to arrays and finally map number words to digits. These findings highlight the importance of directly exploring when children acquire digits rather than assuming that they acquire digits directly from number words. Copyright © 2013 Elsevier B.V. All rights reserved.
Ponari, Marta; Rodríguez-Cuadrado, Sara; Vinson, David; Fox, Neil; Costa, Albert; Vigliocco, Gabriella
Effects of emotion on word processing are well established in monolingual speakers. However, studies that have assessed whether affective features of words undergo the same processing in a native and nonnative language have provided mixed results: studies that have found differences between native language (L1) and second language (L2) processing attributed the difference to the fact that an L2 learned late in life would not be processed affectively, because affective associations are established during childhood. Other studies suggest that adult learners show similar effects of emotional features in L1 and L2. Differences in affective processing of L2 words can be linked to age and context of learning, proficiency, language dominance, and degree of similarity between L2 and L1. Here, in a lexical decision task on tightly matched negative, positive, and neutral words, highly proficient English speakers from typologically different L1s showed the same facilitation in processing emotionally valenced words as native English speakers, regardless of their L1, the age of English acquisition, or the frequency and context of English use. (c) 2015 APA, all rights reserved.
… for the purposes of professional translation. The article refutes three generally accepted notions: (a) a bilingual dictionary is the same as a translation dictionary; (b) a bilingual dictionary is the source of immediately insertable lexical equivalents of lemmas; and (c) a bilingual dictionary provides exclusively semantic-pragmatic information, while a monolingual dictionary always defines the meanings of lexical items. It is argued that bilingual lexicography should be based on a clearly defined conception of the future reference work, specified in terms of lexicographic parameters: (a) the intended user group; (b) the purpose of the dictionary; etc. An "ideal" bilingual L1→L2 translation-oriented dictionary should be a reference work planned to serve the purpose of text production in L2. In the circumstances of professional translation, L2 text production is subject to two kinds of constraints: (a) constraints imposed by the target language and culture; and (b) constraints imposed by the source text written in L1. While constraints of the second kind cannot in principle be foreseen, those of the first kind can and should be accounted for in a bilingual dictionary planned for the professional translator. The article specifies a number of requirements for such a reference work.
Keywords: BILINGUAL DICTIONARY, TRANSLATION DICTIONARY, SEMANTIC-PRAGMATIC EQUIVALENCE, INTERLINGUAL EQUIVALENCE, INTRALINGUAL EQUIVALENCE, DEFINING TECHNIQUE, EQUIVALENCE DEFINITION, PERIPHRASTIC DEFINITION, EXPLANATORY DEFINITION, PROFESSIONAL TRANSLATION, TRANSLATION THEORY, TRANSLATION UNIT
Ziegler, Johannes C; Ferrand, Ludovic; Montant, Marie
In this study, we investigated orthographic influences on spoken word recognition. The degree of spelling inconsistency was manipulated while rime phonology was held constant. Inconsistent words with subdominant spellings were processed more slowly than inconsistent words with dominant spellings. This graded consistency effect was obtained in three experiments. However, the effect was strongest in lexical decision, intermediate in rime detection, and weakest in auditory naming. We conclude that (1) orthographic consistency effects are not artifacts of phonological, phonetic, or phonotactic properties of the stimulus material; (2) orthographic effects can be found even when the error rate is extremely low, which rules out the possibility that they result from strategies used to reduce task difficulty; and (3) orthographic effects are not restricted to lexical decision. However, they are stronger in lexical decision than in other tasks. Overall, the study shows that learning about orthography alters the way we process spoken language.
Li, Qi; Li, Tianshi; Chang, Baobao
Word embeddings play a significant role in many modern NLP systems. Since learning one representation per word is problematic for polysemous words and homonymous words, researchers propose to use one embedding per word sense. Their approaches mainly train word sense embeddings on a corpus. In this paper, we propose to use word sense definitions to learn one embedding per word sense. Experimental results on word similarity tasks and a word sense disambiguation task show that word sense embeddi...
The study attempts to investigate factors underlying the development of spellers' sensitivity to phonological context in English. Native English speakers and Russian speakers of English as a second language (ESL) were tested on their ability to use information about the coda to predict the spelling … on the information about the coda when spelling vowels in nonwords. In both native and non-native speakers, context sensitivity was predicted by English word spelling; in Russian ESL speakers this relationship was mediated by English proficiency. L1 spelling proficiency did not facilitate L2 context sensitivity…
Full Text Available In the present study, the comparative effects of comprehensible input, output and corrective feedback on the receptive acquisition of L2 vocabulary items were investigated. Two groups of beginning EFL learners participated in the study. The control group received comprehensible input only, while the experimental group received input and was required to produce written output. They also received corrective feedback on their lexical errors, if any. This could result in the production of modified output. The results of the study indicated that (a) the group which produced output and received feedback (if necessary) outperformed the group which only received input in the post-test; (b) within the experimental group, feedback played a greater role than output in learners' better performance; and (c) a positive correlation was found between the amount of feedback an individual learner received and his overall performance in the post-test, and also between the amount of feedback given for a specific word and the correct responses given to its related item in the post-test. The findings of this study provide evidence for the role of output production, along with receiving corrective feedback, in enhancing L2 processing by drawing L2 learners' attention to their output, which in turn may improve their receptive acquisition of L2 words. Furthermore, as the results suggested, feedback made a greater contribution to L2 development than output. Keywords: comprehensible input, output, interaction, corrective feedback, modified output, receptive vocabulary acquisition
Khia Anne Johnson
Full Text Available Learners often struggle with L2 sounds, yet little is known about the role of prior pronunciation knowledge and explicit articulatory training in language acquisition. This study asks if existing pronunciation knowledge can bootstrap word learning, and whether short-term audiovisual articulatory training for tongue position with and without a production component has an effect on lexical retention. Participants were trained and tested on stimuli with perceptually salient segments that are challenging to produce. Results indicate that pronunciation knowledge plays an important role in word learning. While much about the extent and shape of this role remains unclear, this study sheds light in three main areas. First, prior pronunciation knowledge leads to increased accuracy in word learning, as all groups trended toward lower accuracy on pseudowords with two novel segments, when compared with those with one or none. Second, all training and control conditions followed similar patterns, with training neither aiding nor inhibiting retention; this is a noteworthy result as previous work has found that the inclusion of production in training leads to decreased performance when testing for retention. Finally, higher production accuracy during practice led to higher retention after the word-learning task, indicating that individual differences and successful training are potentially important indicators of retention. This study provides support for the claim that pronunciation matters in L2 word learning.
Singh, Leher; Foong, Joanne
Infants' abilities to discriminate native and non-native phonemes have been extensively investigated in monolingual learners, demonstrating a transition from language-general to language-specific sensitivities over the first year after birth. However, these studies have mostly been limited to the study of vowels and consonants in monolingual learners. There is relatively little research on other types of phonetic segments, such as lexical tone, even though tone languages are very well represented across languages of the world. The goal of the present study is to investigate how Mandarin Chinese-English bilingual learners contend with non-phonemic pitch variation in English spoken word recognition. This is contrasted with their treatment of phonemic changes in lexical tone in Mandarin spoken word recognition. The experimental design was cross-sectional and three age-groups were sampled (7.5 months, 9 months and 11 months). Results demonstrated limited generalization abilities at 7.5 months, where infants only recognized words in English when matched in pitch and words in Mandarin that were matched in tone. At 9 months, infants recognized words in Mandarin Chinese that matched in tone, but also falsely recognized words that contrasted in tone. At this age, infants also recognized English words whether they were matched or mismatched in pitch. By 11 months, infants correctly recognized pitch-matched and -mismatched words in English but only recognized tonal matches in Mandarin Chinese. Copyright © 2012 Elsevier B.V. All rights reserved.
This paper reports on a qualitative discourse analysis of 290 tokens of (the) same occurring in spoken American English. Our study of these naturally occurring tokens extends and elaborates on the analysis of this expression that was proposed by Halliday and Hasan (1976). We also review other prior research on (the) same in our attempt to provide data-based answers to the following three questions: (1) under what conditions is the definite article the obligatory or optional with same? (2) what are the head nouns that typically follow same, and why is there sometimes no head noun? (3) what type(s) of cohesive relationships can (the) same signal in spoken English discourse? Finally, we explore some typical pedagogical treatments of (the) same in current ESL/EFL textbooks and reference grammars. Then we make our own suggestions regarding how teachers of English as a second or foreign language might go about presenting this useful expression to their learners.
In this article I discuss the contributions to this special issue of Language Learning on orders and sequences in second language (L2) development. Using a list of questions, I attempt to characterize what I see as the strengths, limitations, and unresolved issues in the approaches to L2 development
Yaguchi, Hiroaki; Yabe, Ichiro; Takahashi, Hidehisa; Watanabe, Masashi; Nomura, Taichi; Kano, Takahiro; Matsumoto, Masaki; Nakayama, Keiichi I; Watanabe, Masahiko; Hatakeyama, Shigetsugu
Increasing evidence shows that immune-mediated mechanisms may contribute to the pathogenesis of central nervous system disorders including cerebellar ataxias, as indicated by the aberrant production of neuronal surface antibodies. We previously reported a patient with cerebellar ataxia associated with production of a new anti-neuronal antibody, anti-seizure-related 6 homolog like 2 (Sez6l2). Sez6l2 is a type 1 membrane protein that is highly expressed in the hippocampus and cerebellar cortex, and mice lacking Sez6l2 protein family members develop ataxia. Here we used a proteomics-based approach to show that serum derived from this patient recognizes the extracellular domain of Sez6l2 and that Sez6l2 protein binds to both adducin (ADD) and glutamate receptor 1 (GluR1). Our results indicate that Sez6l2 is one of the auxiliary subunits of the AMPA receptor and acts as a scaffolding protein to link GluR1 to ADD. Furthermore, Sez6l2 overexpression upregulates ADD phosphorylation, whereas siRNA-mediated downregulation of Sez6l2 prevents ADD phosphorylation, suggesting that Sez6l2 modulates AMPA-ADD signal transduction. Copyright © 2017 Elsevier Inc. All rights reserved.
Hudson, Thom; Llosa, Lorena
Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…
Jelena Mihaljević Djigunović
In this qualitative study the author focuses on age effects on young learners' L2 development by comparing the L2 learning processes of six young learners in an instructed setting: three who had started learning English as L2 at age 6/7 and three who had started at age 9/10. Both earlier and later young beginners were followed for three years (during their second, third and fourth year of learning English). The participants' L2 development was measured through their oral output elicited by a two-part speaking task administered each year. Results of the analyses are interpreted taking into account each learner's individual characteristics (learning ability, attitudes and motivation, self-concept) and the characteristics of the context in which they were learning their L2 (attitudes of school staff and parents to early L2 learning, home support, in-class and out-of-class exposure to L2, socio-economic status). The findings show that earlier and later young beginners follow different trajectories in their L2 learning, reflecting the different interactions that age enters into with the other variables.
This study examines the influence of experience with a second language (L2) on the perception of phonological contrasts in a third language (L3). This study contributes to L3 phonology by examining the influence of L2 phonological perception abilities on the perception of an L3 at the beginner level. Participants were native speakers of Korean…
of the L2. This paper reports preliminary data collected from 9 beginner learners of Afrikaans ... Investigating 'Full Transfer': Preliminary Data From The Adult L2 Acquisition of Afrikaans 3 split up into AgrSP, TP and ..... manipulation task, a grammaticality judgment task, and a short truth-value judgment task. (Examples of the ...
Petersen, Henrik Densing
We introduce a notion of L2-Betti numbers for locally compact, second countable, unimodular groups. For lattices, we study the relation to the standard notion of L2-Betti numbers of countable discrete groups. In this way, several new computations are obtained for countable groups, including lattices...
The syncretic cultural identity, in which the Spanish language and its linguistic hegemony are grounded and have reflected its political hegemony, led Spain to take on a predominant role in standardization over the centuries. However, the present situation of the Spanish language is characterized by pluricentrism covering the vast territory in which the language is spoken. This means that a number of centers have set up prestigious standard models providing norms for a country or region. Therefore, a fair enactment of this pluricentrism requires national norms, different ways of codification of the Spanish language, answers to the geographical and social forms which have diverged from a common point of departure, and the idea that the varieties of the Spanish language fulfill speakers' different expressive requirements and help to enhance national identities, in the face of the domination of the peninsular model. This point of view must guide linguistic research, methodology in lexicography, school grammar, translation of foreign languages and especially the teaching of Spanish as L2.
Marks, Ian; Stokes, Stephanie F
Children with word-finding difficulties manifest a high frequency of word-finding characteristics in narrative, yet word-finding interventions have concentrated on single-word treatments and outcome measures. This study measured the effectiveness of a narrative-based intervention in improving single-word picture-naming and word-finding characteristics in narrative in a case study. A case study, quasi-experimental design was employed. The participant was tested on picture naming and spoken word to picture matching on control and treatment words at pre-, mid-, and post-therapy and an 8-month maintenance point. Narrative samples at pre- and post-therapy were analysed for word-finding characteristics and language production. A narrative-based language intervention for word-finding difficulties (NBLI-WF) was carried out for eight sessions, over 3 weeks. The data were subjected to a repeated-measures trend analysis for dichotomous data. Significant improvement occurred for naming accuracy of treatment, but not for control words. The pattern of word-finding characteristics in narrative changed, but the frequency did not reduce. NBLI-WF was effective in improving naming accuracy in this single case, but there were limitations to the research. Further research is required to assess the changes that may occur in language production and word-finding characteristics in narrative. Community clinicians are encouraged to refine clinical practice to ensure clinical research meets quality indicators.
Louwerse, Max; Qu, Zhan
It is assumed that linguistic symbols must be grounded in perceptual information to attain meaning, because the sound of a word in a language has an arbitrary relation with its referent. This paper demonstrates that a strong arbitrariness claim should be reconsidered. In a computational study, we showed that one phonological feature (nasals at the beginning of a word) predicted negative valence in three European languages (English, Dutch, and German) and positive valence in Chinese. In three experiments, we tested whether participants used this feature in estimating the valence of a word. In Experiment 1, Chinese and Dutch participants rated the valence of written valence-neutral words, with the Chinese participants rating nasal-first neutral-valence words more positively and the Dutch participants rating them more negatively. In Experiment 2, Chinese (and Dutch) participants rated the valence of Dutch (and Chinese) written valence-neutral words without being able to understand the meaning of these words. The patterns replicated the valence patterns from Experiment 1. When the written words from Experiment 2 were transformed into spoken words, results in Experiment 3 again showed that participants estimated the valence of words on the basis of the sound of the word. The computational study and psycholinguistic experiments indicated that language users can bootstrap meaning from the sound of a word.
Kartushina, Natalia; Frauenfelder, Ulrich H.
The speech of late second language (L2) learners is generally marked by an accent. The dominant theoretical perspective attributes accents to deficient L2 perception arising from a transfer of L1 phonology, which is thought to influence L2 perception and production. In this study we evaluate the explanatory role of L2 perception in L2 production and explore alternative explanations arising from the L1 phonological system, such as, for example, the role of L1 production. Specifically, we examine the role of an individual's L1 productions in the production of L2 vowel contrasts. Fourteen Spanish adolescents studying French at school were assessed on their perception and production of the mid-close/mid-open contrasts, /ø-œ/ and /e-ε/, which are, respectively, acoustically distinct from Spanish sounds, or similar to them. The participants' native productions were explored to assess (1) the variability in the production of native vowels (i.e., the compactness of vowel categories in F1/F2 acoustic space), and (2) the position of the vowels in the acoustic space. The results revealed that although poorly perceived contrasts were generally produced poorly, there was no correlation between individual performance in perception and production, and no effect of L2 perception on L2 production in mixed-effects regression analyses. This result is consistent with a growing body of psycholinguistic and neuroimaging research that suggests partial dissociations between L2 perception and production. In contrast, individual differences in the compactness and position of native vowels predicted L2 production accuracy. These results point to the existence of surface transfer of individual L1 phonetic realizations to L2 space and demonstrate that pre-existing features of the native space in production partly determine how new sounds can be accommodated in that space. PMID:25414678