WorldWideScience

Sample records for spoken l2 words

  1. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words containing its counterpart in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  2. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. To this end, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task in order to protect its own accuracy and speed.

  3. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
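
    The record above is truncated, but the general pipeline it describes (mapping objectively measurable features of learner speech onto a small number of discrete proficiency levels with a random forest) can be illustrated as follows. This is a minimal, hypothetical sketch using scikit-learn with made-up features and labels, not the authors' implementation.

    ```python
    # Hypothetical sketch: random-forest classification of oral proficiency levels
    # from objectively measurable speech features. Features and labels are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # One row per learner; columns stand in for measurable features
    # (e.g., speech rate, type-token ratio, mean lexical frequency, pause rate).
    X = rng.normal(size=(200, 4))
    # Discrete oral proficiency levels (0 = low, 1 = mid, 2 = high), placeholders only.
    y = rng.integers(0, 3, size=200)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    clf.fit(X, y)
    print("feature importances:", clf.feature_importances_)
    ```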

  4. Towards Affordable Disclosure of Spoken Word Archives

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; Heeren, W.F.L.; Huijbregts, M.A.H.; Hiemstra, Djoerd; de Jong, Franciska M.G.; Larson, M; Fernie, K; Oomen, J; Cigarran, J.

    2008-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken word archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, the least we want to be

  5. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
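
    The string-kernel representation described in this record can be illustrated with a small, self-contained sketch: a word is coded as the multiset of its ordered phoneme pairs (open diphones), independent of absolute temporal position, and two words are compared by the overlap of those features. This is a simplified illustration of the general technique, not the published model; the phoneme codings are hypothetical.

    ```python
    # Simplified open-diphone ("string kernel") word representation: ordered phoneme
    # pairs, adjacent or not, with a cosine similarity between two words' features.
    from collections import Counter
    from itertools import combinations
    from math import sqrt

    def open_diphones(phonemes):
        """All ordered phoneme pairs, e.g. ('k','ae','t') -> k_ae, k_t, ae_t."""
        return Counter(a + "_" + b for a, b in combinations(phonemes, 2))

    def kernel_similarity(word_a, word_b):
        """Cosine similarity between the diphone feature vectors of two words."""
        da, db = open_diphones(word_a), open_diphones(word_b)
        dot = sum(da[k] * db[k] for k in da)
        norm = sqrt(sum(v * v for v in da.values())) * sqrt(sum(v * v for v in db.values()))
        return dot / norm if norm else 0.0

    # Hypothetical phoneme codings for 'cattle' and 'kettle'.
    print(kernel_similarity(("k", "ae", "t", "l"), ("k", "e", "t", "l")))
    ```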

  6. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, this kind of probabilistic information about syllables may cue the locations…

  7. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f

  8. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  9. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  10. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    Science.gov (United States)

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  11. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,

  12. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  13. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  14. Interference of spoken word recognition through phonological priming from visual objects and printed words

    NARCIS (Netherlands)

    McQueen, J.M.; Hüttig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase

  15. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    Science.gov (United States)

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  16. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    Science.gov (United States)

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  17. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  18. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    data of the corpus and includes more formal audio material (lectures, TV and ... meticulous word-class tagging of nouns, adjectives, verbs etc., this book is not limited to word ... Fréquences d'utilisation des mots en français écrit contemporain.

  19. Interference of spoken word recognition through phonological priming from visual objects and printed words

    OpenAIRE

    McQueen, J.; Huettig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g...

  20. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    Science.gov (United States)

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactic information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information of Chinese…

  1. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  2. Allophones, not phonemes in spoken-word recognition

    NARCIS (Netherlands)

    Mitterer, H.A.; Reinisch, E.; McQueen, J.M.

    2018-01-01

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic

  3. The Impact of Orthographic Consistency on German Spoken Word Identification

    Science.gov (United States)

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  4. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    Science.gov (United States)

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  5. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  6. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  7. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

  8. L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition

    Science.gov (United States)

    Hamada, Megumi

    2017-01-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…

  9. The Activation of Embedded Words in Spoken Word Recognition

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  10. The Activation of Embedded Words in Spoken Word Recognition.

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions.

  11. MANIPULATING L2 LEARNERS' ONLINE DICTIONARY USE AND ITS EFFECT ON L2 WORD RETENTION

    Directory of Open Access Journals (Sweden)

    Elke Peters

    2007-02-01

    This study explored the effect of two enhancement techniques on L2 learners' look-up behaviour during a reading task and word retention afterwards amongst Flemish learners of German: a Vocabulary Test Announcement and Task-induced Word Relevance. Eighty-four participants were recruited for this study. They were randomly assigned to one of two groups: 1) not forewarned of an upcoming vocabulary test (incidental condition) or 2) forewarned of a vocabulary test (intentional condition). Task-induced Word Relevance was operationalized by a reading comprehension task. The relevance factor comprised two levels: plus-relevant and minus-relevant target words. Plus-relevant words needed to be looked up and used receptively in order to answer the comprehension questions. In other words, the reading comprehension task could not be accomplished without knowing the meaning of the plus-relevant words. The minus-relevant target words, on the other hand, were not linked to the reading comprehension questions. Our findings show a significant effect of Test Announcement and Word Relevance on whether a target word is looked up. In addition, Word Relevance also affects the frequency of clicks on target words. Word retention is only influenced by Task-induced Word Relevance. The effect of Word Relevance is durable.

  12. Semantic Ambiguity Effects in L2 Word Recognition.

    Science.gov (United States)

    Ishida, Tomomi

    2018-06-01

    The present study examined the ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words on lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a letter string on a computer screen was an English word or not. An ambiguity advantage was found for both groups and greater ambiguity effects were found for the non-native speaker group when compared to the native speaker group. The findings imply that the larger ambiguity advantage for L2 processing is due to their slower response time in producing adequate feedback activation from the semantic level to the orthographic level.

  13. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
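
    The linear mixed models analysis described above can be sketched as follows, using simulated, hypothetical data in place of the study's: reaction time is modelled as a function of lexical frequency and first-syllable frequency (the 2 x 2 within-subject design), with random intercepts for participants. Column names and effect sizes are placeholders.

    ```python
    # Hypothetical mixed-model sketch: RT ~ lexical frequency * syllable frequency,
    # with by-subject random intercepts. Data are simulated, not the study's.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_subj, n_items = 45, 128
    df = pd.DataFrame({
        "subject":   np.repeat(np.arange(n_subj), n_items),
        "lex_freq":  np.tile(np.repeat(["low", "high"], n_items // 2), n_subj),
        "syll_freq": np.tile(["low", "high"] * (n_items // 2), n_subj),
    })
    # Simulated latencies: facilitation from high lexical frequency,
    # inhibition from high first-syllable frequency.
    df["rt"] = (900
                - 40 * (df["lex_freq"] == "high")
                + 25 * (df["syll_freq"] == "high")
                + rng.normal(0, 60, len(df)))

    model = smf.mixedlm("rt ~ lex_freq * syll_freq", df, groups=df["subject"]).fit()
    print(model.summary())
    ```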

  14. Talker and background noise specificity in spoken word recognition memory

    Directory of Open Access Journals (Sweden)

    Angela Cooper

    2017-11-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker’s voice (talker condition) and background noise (noise condition), using a delayed recognition memory paradigm. The speech and noise signals were spectrally-separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.

  15. Accounting for L2 learners’ errors in word stress placement

    Directory of Open Access Journals (Sweden)

    Clara Herlina Karjo

    2016-01-01

    Stress placement in English words is governed by highly complicated rules. Thus, assigning stress correctly in English words has been a challenging task for L2 learners, especially Indonesian learners, since their L1 does not recognize such a stress system. This study explores the production of English word stress by 30 university students. The method used for this study is an immediate repetition task. Participants are instructed to identify the stress placement of 80 English words which are auditorily presented as stimuli and immediately repeat the words with correct stress placement. The objectives of this study are to find out whether English word stress placement is problematic for L2 learners and to investigate the phonological factors which account for these problems. The research reveals that L2 learners differ in their ability to produce stress, but three-syllable words are more problematic than two-syllable words. Moreover, misplacement of stress is caused by, among others, the influence of vowel length and vowel height.

  16. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  17. Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors

    Science.gov (United States)

    Kalyuga, Slava

    2012-01-01

    Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…

  18. Influence of syllable structure on L2 auditory word learning.

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-04-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.

  19. Negative Transfer Effects on L2 Word Order Processing.

    Science.gov (United States)

    Erdocia, Kepa; Laka, Itziar

    2018-01-01

    Does first language (L1) word order affect the processing of non-canonical but grammatical syntactic structures in second language (L2) comprehension? In the present study, we test whether L1-Spanish speakers of L2-Basque process subject-verb-object (SVO) and object-verb-subject (OVS) non-canonical word order sentences of Basque in the same way as Basque native speakers. Crucially, while OVS orders are non-canonical in both Spanish and Basque, SVO is non-canonical in Basque but is the canonical word order in Spanish. Our electrophysiological results showed that the characteristics of the L1 affect the processing of the L2 even in highly proficient, early bilinguals. Specifically, in the non-native group, we observed a left anterior negativity-like component when comparing S and O at sentence-initial position and a P600 when comparing those elements at sentence-final position. These results are similar to those reported by Casado et al. (2005) for native speakers of Spanish, indicating that L2-Basque speakers rely on their L1-Spanish when processing SVO-OVS word order sentences. Our results favored the competition model (MacWhinney, 1997).

  20. Negative Transfer Effects on L2 Word Order Processing

    Directory of Open Access Journals (Sweden)

    Kepa Erdocia

    2018-03-01

    Does first language (L1) word order affect the processing of non-canonical but grammatical syntactic structures in second language (L2) comprehension? In the present study, we test whether L1-Spanish speakers of L2-Basque process subject–verb–object (SVO) and object–verb–subject (OVS) non-canonical word order sentences of Basque in the same way as Basque native speakers. Crucially, while OVS orders are non-canonical in both Spanish and Basque, SVO is non-canonical in Basque but is the canonical word order in Spanish. Our electrophysiological results showed that the characteristics of the L1 affect the processing of the L2 even in highly proficient, early bilinguals. Specifically, in the non-native group, we observed a left anterior negativity-like component when comparing S and O at sentence-initial position and a P600 when comparing those elements at sentence-final position. These results are similar to those reported by Casado et al. (2005) for native speakers of Spanish, indicating that L2-Basque speakers rely on their L1-Spanish when processing SVO–OVS word order sentences. Our results favored the competition model (MacWhinney, 1997).

  1. V2 word order in subordinate clauses in spoken Danish

    DEFF Research Database (Denmark)

    Jensen, Torben Juel; Christensen, Tanya Karoli

    are asymmetrically distributed, we argue that the word order difference should rather be seen as a signal of (subtle) semantic differences. In main clauses, V3 is highly marked in comparison to V2, and occurs in what may be called emotives. In subordinate clauses, V2 is marked and signals what has been called...... ”assertiveness”, but is rather a question of foregrounding (cf. Simons 2007: Main Point of Utterance). The paper presents the results of a study of word order in subordinate clauses in contemporary spoken Danish and focuses on how to include the proposed semantic difference as a factor influencing the choice...... studies of two age cohorts of speakers in Copenhagen, recorded in the 1980s and again in 2005-07, and on recent recordings with two age cohorts of speakers from the western part of Jutland. This makes it possible to study variation and change with respect to word order in subordinate clauses in both real...

  2. The role of grammatical category information in spoken word retrieval.

    Science.gov (United States)

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production.

  3. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  4. Partial Word Knowledge: Frontier Words in the L2 Mental Lexicon

    Science.gov (United States)

    Zareva, Alla

    2012-01-01

    The study set out to examine the partial word knowledge of native speakers, L2 advanced, and intermediate learners of English with regard to four word features from Richards' (1976) taxonomy of aspects describing what knowing a word entails. To capture partial familiarity, the participants completed in writing a test containing low and mid…

  5. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  6. The gender congruency effect during bilingual spoken-word recognition

    Science.gov (United States)

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  7. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    Science.gov (United States)

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  8. "A Unified Poet Alliance": The Personal and Social Outcomes of Youth Spoken Word Poetry Programming

    Science.gov (United States)

    Weinstein, Susan

    2010-01-01

    This article places youth spoken word (YSW) poetry programming within the larger framework of arts education. Drawing primarily on transcripts of interviews with teen poets and adult teaching artists and program administrators, the article identifies specific benefits that participants ascribe to youth spoken word, including the development of…

  9. Toddlers' sensitivity to within-word coarticulation during spoken word recognition: Developmental differences in lexical competition.

    Science.gov (United States)

    Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix

    2016-12-01

    To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. We interpret the differences in spoken word

  11. Interaction in Spoken Word Recognition Models: Feedback Helps

    Science.gov (United States)

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

    Human perception, cognition, and action requires fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
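
    The feedback mechanism at issue in this record can be illustrated with a deliberately tiny interactive-activation sketch (a simplification for illustration, not TRACE or the authors' simulations): word units receive bottom-up support from phoneme units and, when feedback is enabled, send top-down support back to the phonemes they contain, which can sharpen a degraded input. The lexicon, parameters, and noise level are all made up.

    ```python
    # Toy interactive-activation sketch: compare word activations with and without
    # top-down feedback, given a noisy bottom-up phoneme input. Illustration only.
    import numpy as np

    phonemes = ["k", "ae", "t", "d", "o", "g"]
    lexicon = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"]}
    # Word-by-phoneme connectivity: 1.0 if the phoneme occurs in the word.
    W = np.array([[p in phons for p in phonemes] for phons in lexicon.values()], dtype=float)

    def recognize(bottom_up, feedback=True, steps=20, alpha=0.2, beta=0.05):
        """Iteratively settle word activations from a (possibly noisy) phoneme input."""
        phon = np.clip(bottom_up.copy(), 0, 1)
        words = np.zeros(len(lexicon))
        for _ in range(steps):
            words += alpha * (W @ phon - words)                    # bottom-up: phonemes -> words
            words = np.clip(words, 0, 1)
            if feedback:
                phon = np.clip(phon + beta * (W.T @ words), 0, 1)  # top-down: words -> phonemes
        return dict(zip(lexicon, np.round(words, 3)))

    rng = np.random.default_rng(0)
    # Degraded /k ae t/ input with noise on every phoneme unit.
    noisy = np.array([0.6, 0.5, 0.6, 0.0, 0.0, 0.0]) + rng.normal(0, 0.2, 6)
    print("with feedback:   ", recognize(noisy, feedback=True))
    print("without feedback:", recognize(noisy, feedback=False))
    ```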

  12. Interaction in Spoken Word Recognition Models: Feedback Helps.

    Science.gov (United States)

    Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D

    2018-01-01

    Human perception, cognition, and action requires fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  13. Interaction in Spoken Word Recognition Models: Feedback Helps

    Directory of Open Access Journals (Sweden)

    James S. Magnuson

    2018-04-01

    Human perception, cognition, and action requires fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  14. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    Science.gov (United States)

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences: the Arabic group showed higher accuracy in the final position than in the middle position, while the Chinese group showed the opposite pattern, and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  15. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  16. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  17. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    Science.gov (United States)

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.

  18. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  19. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1 , we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  20. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    Science.gov (United States)

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  1. Orthographic consistency affects spoken word recognition at different grain-sizes

    DEFF Research Database (Denmark)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  2. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    Science.gov (United States)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  3. The Effect of Lexical Frequency on Spoken Word Recognition in Young and Older Listeners

    Science.gov (United States)

    Revill, Kathleen Pirog; Spieler, Daniel H.

    2011-01-01

    When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place a greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults’ eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degradation of auditory quality in younger listeners does not reproduce this result. These data are most consistent with an increased role for lexical frequency with age. PMID:21707175

  4. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    OpenAIRE

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2011-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex…

  5. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word, in Chinese.

  6. Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; O'Toole, John Mitchell

    2015-01-01

    The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…

  7. Discourse context and the recognition of reduced and canonical spoken words

    OpenAIRE

    Brouwer, S.; Mitterer, H.; Huettig, F.

    2013-01-01

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" ...

  8. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang Zhang

    2014-02-01

    Full Text Available The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged across the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  9. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  10. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  11. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide ranging and up-to-date corpus of English: the British National Corpus.

  12. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    Science.gov (United States)

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424
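
    The "point of discrimination" between target and competitor fixations reported above can be estimated in several ways; the sketch below shows one common approach, taking the first moment at which the target-minus-competitor fixation proportion stays above a threshold for a run of samples. The threshold and run length are illustrative assumptions, not the authors' analysis settings.

        # Illustrative sketch: estimate when fixations to the target reliably diverge
        # from fixations to the phonological competitor (visual-world eye tracking).
        import numpy as np

        def discrimination_point(target_prop, competitor_prop, times,
                                 threshold=0.10, run=5):
            """target_prop / competitor_prop: mean fixation proportions per time sample."""
            diff = np.asarray(target_prop) - np.asarray(competitor_prop)
            above = diff > threshold
            for i in range(len(above) - run + 1):
                # first sample that starts `run` consecutive samples above threshold
                if above[i:i + run].all():
                    return times[i]
            return None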

  13. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Embodiment and second-language: automatic activation of motor responses during processing spatially associated L2 words and emotion L2 words in a vertical Stroop paradigm.

    Science.gov (United States)

    Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2014-05-01

    Converging evidence suggests that understanding our first-language (L1) results in reactivation of experiential sensorimotor traces in the brain. Surprisingly, little is known regarding the involvement of these processes during second-language (L2) processing. Participants saw L1 or L2 words referring to entities with a typical location (e.g., star, mole) (Experiment 1 & 2) or to an emotion (e.g., happy, sad) (Experiment 3). Participants responded to the words' ink color with an upward or downward arm movement. Despite word meaning being fully task-irrelevant, L2 automatically activated motor responses similar to L1 even when L2 was acquired rather late in life (age >11). Specifically, words such as star facilitated upward, and words such as root facilitated downward responses. Additionally, words referring to positive emotions facilitated upward, and words referring to negative emotions facilitated downward responses. In summary our study suggests that reactivation of experiential traces is not limited to L1 processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Word Boundaries in L2 Speech: Evidence from Polish Learners of English

    Science.gov (United States)

    Schwartz, Geoffrey

    2016-01-01

    Acoustic and perceptual studies investigate B2-level Polish learners' acquisition of second language (L2) English word boundaries involving word-initial vowels. In production, participants were less likely to produce glottalization of phrase-medial initial vowels in L2 English than in first language (L1) Polish. Perception studies employing word…

  16. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  17. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  18. The Effects of Word Exposure Frequency and Elaboration of Word Processing on Incidental L2 Vocabulary Acquisition through Reading

    Science.gov (United States)

    Eckerth, Johannes; Tavakoli, Parveneh

    2012-01-01

    Research on incidental second language (L2) vocabulary acquisition through reading has claimed that repeated encounters with unfamiliar words and the relative elaboration of processing these words facilitate word learning. However, so far both variables have been investigated in isolation. To help close this research gap, the current study…

  19. Competition in the perception of spoken Japanese words

    NARCIS (Netherlands)

    Otake, T.; McQueen, J.M.; Cutler, A.

    2010-01-01

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors,

  20. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustive recollecting the sensory properties of "old" words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  2. Attention demands of spoken word planning: A review

    Directory of Open Access Journals (Sweden)

    Ardi Roelofs

    2011-11-01

    Full Text Available Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent that the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and nonlinguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated nonlinguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated nonlinguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment.

  3. Intentional and Reactive Inhibition during Spoken-Word Stroop Task Performance in People with Aphasia

    Science.gov (United States)

    Pompon, Rebecca Hunting; McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.

    2015-01-01

    Purpose: The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition--both intentional and reactive inhibition--during spoken-word production in PWA and in…

  4. Learning and Consolidation of New Spoken Words in Autism Spectrum Disorder

    Science.gov (United States)

    Henderson, Lisa; Powell, Anna; Gaskell, M. Gareth; Norbury, Courtenay

    2014-01-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words…

  5. Guide to Spoken-Word Recordings: Popular Literature. Reference Circular No. 95-01.

    Science.gov (United States)

    Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.

    This reference circular contains selected sources for the purchase, rental, or loan of fiction and nonfiction spoken-word recordings. The sources in sections 1, 2, and 3 are commercial and, unless otherwise noted, offer abridged and unabridged titles on audio cassette. Sources in section 1 make available popular fiction; classics; poetry; drama;…

  6. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    Science.gov (United States)

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  7. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    OpenAIRE

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2012-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind…

  8. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    OpenAIRE

    Jesse, A.; McQueen, J.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker...

  9. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    Science.gov (United States)

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences.
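
    Spectral degradation by noise vocoding with a small number of channels, as used above, can be sketched as follows. The filter design, band edges, and envelope extraction below are assumptions chosen for illustration; they are not the stimulus-generation settings of the study.

        # Hedged sketch of an n-channel noise vocoder (numpy/scipy).
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def noise_vocode(x, fs, n_channels=4, lo=80.0, hi=7000.0, seed=0):
            hi = min(hi, 0.45 * fs)                           # keep bands below Nyquist
            edges = np.geomspace(lo, hi, n_channels + 1)      # log-spaced band edges
            noise = np.random.default_rng(seed).standard_normal(len(x))
            out = np.zeros(len(x))
            for f1, f2 in zip(edges[:-1], edges[1:]):
                b, a = butter(3, [f1 / (fs / 2), f2 / (fs / 2)], btype="band")
                band = filtfilt(b, a, x)                      # speech in this band
                env = np.abs(hilbert(band))                   # amplitude envelope
                out += env * filtfilt(b, a, noise)            # envelope-modulated noise carrier
            return out / np.max(np.abs(out)) * np.max(np.abs(x))   # match original peak level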

  10. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    Science.gov (United States)

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  12. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information at the partial-phonological overlap was manipulated; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with the targets directly. Results of the three experiments showed that the phonological competitor effects were observed at both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  13. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    Science.gov (United States)

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
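
    The phi-square statistic referred to above is, in its general form, chi-square divided by the number of observations. The sketch below computes such a value from two phonemes' rows of a perceptual confusion matrix; treating confusion-matrix rows this way is only an illustrative assumption, and the authors' exact competition metric may differ (their measure is expressed as similarity, whereas larger values here indicate more dissimilar response profiles).

        # Illustrative phi-square association between two confusion-matrix rows.
        import numpy as np
        from scipy.stats import chi2_contingency

        def phi_square(row_a, row_b):
            table = np.vstack([row_a, row_b])          # 2 x k table of response counts
            chi2, _, _, _ = chi2_contingency(table, correction=False)
            return chi2 / table.sum()                  # phi^2 = chi^2 / N; 0 means identical profiles

        # e.g. phi_square([50, 10, 5], [45, 12, 8]) gives a small value: similar confusion profiles
        # (assumes every response category received at least one response across the two rows)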

  14. Spoken word production: A theory of lexical access

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2001-01-01

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker's focusing on a target concept and ending with the initiation of articulation. The initial

  15. Attention demands of spoken word planning: A review

    NARCIS (Netherlands)

    Roelofs, A.P.A.; Piai, V.

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention.

  16. Event-related potentials reflecting the frequency of unattended spoken words

    DEFF Research Database (Denmark)

    Shtyrov, Yury; Kimppa, Lilli; Pulvermüller, Friedemann

    2011-01-01

    How are words represented in the human brain and can these representations be qualitatively assessed with respect to their structure and properties? Recent research demonstrates that neurophysiological signatures of individual words can be measured when subjects do not focus their attention on the speech input. Words were presented in passive non-attend conditions, with acoustically matched high- and low-frequency words along with pseudo-words. Using factorial and correlation analyses, we found that already at ~120 ms after the spoken stimulus information was available, the amplitude of brain responses was modulated by the words' lexical frequency … for the most frequent word stimuli; later on (~270 ms), a more global lexicality effect with bilateral perisylvian sources was found for all stimuli, suggesting faster access to more frequent lexical entries. Our results support the account of word memory traces as interconnected neuronal circuits, and suggest…

  17. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  18. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Science.gov (United States)

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  19. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-05-16

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanisms induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at an early stage of recognition (~150-250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. At ~300-500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements.

  20. Spectrotemporal processing drives fast access to memory traces for spoken words.

    Science.gov (United States)

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Children's Spoken Word Recognition and Contributions to Phonological Awareness and Nonword Repetition: A 1-Year Follow-Up

    Science.gov (United States)

    Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.

    2009-01-01

    This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…

  2. The Self-Organization of a Spoken Word

    Science.gov (United States)

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213
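
    As a rough illustration of the distributional comparison described above, the snippet below fits a lognormal and a Pareto-type inverse power law to a vector of pronunciation times and compares log-likelihoods. The file name is a placeholder, and the study itself fit mixtures of these distributions rather than single distributions, so this is a simplified sketch only.

        # Simplified sketch: lognormal vs. inverse power law fit to naming latencies.
        import numpy as np
        from scipy import stats

        rts = np.loadtxt("naming_times_ms.txt")             # placeholder data file

        ln_params = stats.lognorm.fit(rts, floc=0)          # (shape, loc, scale)
        pl_params = stats.pareto.fit(rts, floc=0)           # (b, loc, scale)

        ln_ll = stats.lognorm.logpdf(rts, *ln_params).sum()
        pl_ll = stats.pareto.logpdf(rts, *pl_params).sum()
        print("log-likelihood  lognormal:", ln_ll, "  power law:", pl_ll)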

  3. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    Science.gov (United States)

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study took a look at the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, like in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.

  5. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
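
    A minimal sketch of the kind of time-frequency decomposition described above is given below: single-channel power estimated with complex Morlet wavelets, from which theta (3-7 Hz) and alpha (8-12 Hz) band power can be averaged. Wavelet length and cycle count are illustrative assumptions rather than the study's analysis parameters.

        # Minimal Morlet-wavelet power sketch (numpy only).
        import numpy as np

        def morlet_power(signal, fs, freqs, n_cycles=5):
            t = np.arange(-1.0, 1.0, 1.0 / fs)                      # 2-s wavelet support
            power = np.empty((len(freqs), len(signal)))
            for i, f in enumerate(freqs):
                sigma = n_cycles / (2 * np.pi * f)                  # temporal width of the wavelet
                wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
                wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # unit energy
                power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
            return power

        # e.g. theta = morlet_power(eeg, fs=500, freqs=np.arange(3, 8)).mean(axis=0)
        #      alpha = morlet_power(eeg, fs=500, freqs=np.arange(8, 13)).mean(axis=0)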

  6. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words whose orthographic syllable neighbors are large in number rather than those that are small. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  7. Rapid L2 Word Learning through High Constraint Sentence Context: An Event-Related Potential Study

    Directory of Open Access Journals (Sweden)

    Baoguo Chen

    2017-12-01

    Full Text Available Previous studies have found quantity of exposure, i.e., frequency of exposure (Horst et al., 1998; Webb, 2008; Pellicer-Sánchez and Schmitt, 2010), is important for second language (L2) contextual word learning. Besides this factor, context constraint and L2 proficiency level have also been found to affect contextual word learning (Pulido, 2003; Tekmen and Daloglu, 2006; Elgort et al., 2015; Ma et al., 2015). In the present study, we adopted the event-related potential (ERP) technique and chose high constraint sentences as reading materials to further explore the effects of quantity of exposure and proficiency on L2 contextual word learning. Participants were Chinese learners of English with different English proficiency levels. For each novel word, there were four high constraint sentences with the critical word at the end of the sentence. Learners read sentences and made semantic relatedness judgment afterwards, with ERPs recorded. Results showed that in the high constraint condition where each pseudoword was embedded in four sentences with consistent meaning, N400 amplitude upon this pseudoword decreased significantly as learners read the first two sentences. High proficiency learners responded faster in the semantic relatedness judgment task. These results suggest that in high quality sentence contexts, L2 learners could rapidly acquire word meaning without multiple exposures, and L2 proficiency facilitated this learning process.

  8. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Full Text Available Lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason the use of multi-modal speech processing has been limited. In this study, we have developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement by the light reflected from the mouth region. In our experiment, we achieved a 66% word recognition rate from lip movement features alone. This experimental result shows that our developed sensor can be utilized as a tool for multi-modal speech processing when combined with a microphone mounted on the headset.
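
    The record does not say how words were classified from the sensor signal, so the sketch below only illustrates one plausible approach under that assumption: nearest-template matching of a one-dimensional reflected-light trace using dynamic time warping. The vocabulary, templates, and trace values are invented.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping distance between two 1-D lip-movement traces."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def recognize(trace, templates):
            """Return the vocabulary word whose template is closest to the observed trace."""
            return min(templates, key=lambda word: dtw_distance(trace, templates[word]))

        # Hypothetical reflected-light traces (arbitrary units) for a two-word vocabulary.
        templates = {
            "hello": np.array([0.1, 0.4, 0.8, 0.5, 0.2]),
            "yes":   np.array([0.2, 0.7, 0.3]),
        }
        print(recognize(np.array([0.1, 0.5, 0.9, 0.4, 0.1]), templates))  # -> "hello"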

  9. The influence of talker and foreign-accent variability on spoken word identification.

    Science.gov (United States)

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  10. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
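
    The computerized quantitative text analysis used in such work is, at its core, dictionary-based word counting. The sketch below computes the proportion of positive and negative emotion words in a text; the word lists are toy stand-ins, since the actual LIWC-style categories this kind of study relies on are proprietary and far larger.

        import re
        from collections import Counter

        POSITIVE = {"love", "hope", "peace", "thank", "happy", "joy"}   # toy dictionary
        NEGATIVE = {"fear", "hate", "pain", "sad", "guilt", "sorry"}    # toy dictionary

        def emotion_proportions(text):
            """Proportion of tokens that are positive / negative emotion words."""
            tokens = re.findall(r"[a-z']+", text.lower())
            counts = Counter(tokens)
            n = sum(counts.values()) or 1
            positive = sum(counts[w] for w in POSITIVE) / n
            negative = sum(counts[w] for w in NEGATIVE) / n
            return positive, negative

        print(emotion_proportions("I hope you all find peace and love"))  # -> (0.375, 0.0)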

  11. Working memory affects older adults' use of context in spoken-word recognition.

    Science.gov (United States)

    Janse, Esther; Jesse, Alexandra

    2014-01-01

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  12. The Effects of Listener's Familiarity about a Talker on the Free Recall Task of Spoken Words

    Directory of Open Access Journals (Sweden)

    Chikako Oda

    2011-10-01

    Full Text Available Several recent studies have examined the interaction between a talker's acoustic characteristics and spoken word recognition in speech perception and have shown that a listener's familiarity with a talker influences the ease of spoken word processing. The present study examined the effect of listeners' familiarity with talkers on the free recall of words spoken by two talkers. Subjects participated in three conditions of the task, in which the listener had (1) explicit knowledge, (2) implicit knowledge, or (3) no knowledge of the talker. In condition (1), subjects were familiar with the talkers' voices and were initially informed whose voices they would hear. In condition (2), subjects were familiar with the talkers' voices but were not informed whose voices they would hear. In condition (3), subjects were entirely unfamiliar with the talkers' voices and were not informed whose voices they would hear. We analyzed the percentage of correct answers and compared these results across the three conditions. We discuss whether a listener's knowledge of an individual talker's acoustic characteristics, stored in long-term memory, could reduce the cognitive resources required for verbal information processing.

  13. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan eSumner

    2014-01-01

    Full Text Available Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  14. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  15. An fMRI study of concreteness effects in spoken word recognition.

    Science.gov (United States)

    Roxbury, Tracy; McMahon, Katie; Copland, David A

    2014-09-30

    Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high imageability nouns, (b) abstract, low imageability nouns and (c) opaque legal pseudowords presented in a pseudorandomised, event-related design. Activation for the concrete, abstract and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings of concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than both abstract and pseudoword conditions and the abstract condition was significantly faster than the pseudoword condition (p < .05). Concrete words relative to abstract words elicited greater activity in regions implicated in spoken word recognition. Significant activity was also elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions that are activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.

  16. Long-term temporal tracking of speech rate affects spoken-word recognition.

    Science.gov (United States)

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  17. An fMRI study of concreteness effects during spoken word recognition in aging. Preservation or attenuation?

    Directory of Open Access Journals (Sweden)

    Tracy eRoxbury

    2016-01-01

    Full Text Available It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and their associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete versus abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing.

  18. The influence of orthographic experience on the development of phonological preparation in spoken word production.

    Science.gov (United States)

    Li, Chuchu; Wang, Min

    2017-08-01

    Three sets of experiments using picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of the phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who have not received formal literacy instruction, Grade 1 children who are comparatively more exposed to the alphabetic pinyin system and have very limited Chinese character knowledge, Grades 2 and 4 children who have better character knowledge and more exposure to characters, and skilled adult readers who have the most advanced character knowledge and most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select the onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.

  19. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by the cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that the spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competition between multiple candidate words and lead to increased processing difficulties, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among the candidate words during spoken word recognition.

  20. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  1. Effects of lexical competition on immediate memory span for spoken words.

    Science.gov (United States)

    Goh, Winston D; Pisoni, David B

    2003-08-01

    Current theories and models of the structural organization of verbal short-term memory are primarily based on evidence obtained from manipulations of features inherent in the short-term traces of the presented stimuli, such as phonological similarity. In the present study, we investigated whether properties of the stimuli that are not inherent in the short-term traces of spoken words would affect performance in an immediate memory span task. We studied the lexical neighbourhood properties of the stimulus items, which are based on the structure and organization of words in the mental lexicon. The experiments manipulated lexical competition by varying the phonological neighbourhood structure (i.e., neighbourhood density and neighbourhood frequency) of the words on a test list while controlling for word frequency and intra-set phonological similarity (family size). Immediate memory span for spoken words was measured under repeated and nonrepeated sampling procedures. The results demonstrated that lexical competition only emerged when a nonrepeated sampling procedure was used and the participants had to access new words from their lexicons. These findings were not dependent on individual differences in short-term memory capacity. Additional results showed that the lexical competition effects did not interact with proactive interference. Analyses of error patterns indicated that item-type errors, but not positional errors, were influenced by the lexical attributes of the stimulus items. These results complement and extend previous findings that have argued for separate contributions of long-term knowledge and short-term memory rehearsal processes in immediate verbal serial recall tasks.
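
    Phonological neighbourhood density of the kind manipulated here is conventionally operationalised as the number of lexicon entries differing from a target by a single phoneme substitution, addition, or deletion. The sketch below computes that count over a toy lexicon in which each phoneme is written as one character; the lexicon and transcriptions are invented for illustration.

        def is_neighbor(a, b):
            """True if phoneme strings a and b differ by one substitution, addition, or deletion."""
            if a == b:
                return False
            la, lb = len(a), len(b)
            if abs(la - lb) > 1:
                return False
            if la == lb:                                    # one substitution
                return sum(x != y for x, y in zip(a, b)) == 1
            short, long_ = (a, b) if la < lb else (b, a)    # one addition/deletion
            return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

        def neighborhood_density(word, lexicon):
            return sum(is_neighbor(word, other) for other in lexicon)

        # Phonemes coded as single characters, e.g. "kat" for /kaet/.
        lexicon = ["kat", "bat", "hat", "kab", "kast", "at", "dog"]
        print(neighborhood_density("kat", lexicon))  # -> 5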

  2. L2 Learners' Recognition of Unfamiliar Idioms Composed of Familiar Words

    Science.gov (United States)

    Kim, Choonkyong

    2016-01-01

    Most second language (L2) learners are aware of the importance of vocabulary, and this awareness usually directs their attention to learning new words. By contrast, learners do not often recognise unfamiliar idioms if all the compositional parts look familiar to them such as "turn the corner" or "carry the day." College-level…

  3. L[subscript 1] and L[subscript 2] Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L[subscript 1]) and second language (L[subscript 2]) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  4. Interference Effects on the Recall of Pictures, Printed Words and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    Thirty college undergraduates participated in a study of the effects of acoustic and visual interference on the recall of word and picture triads in both short-term and long-term memory. The subjects were presented 24 triads of monosyllabic nouns representing all of the possible combinations of presentation types: pictures, printed words, and…

  5. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    Directory of Open Access Journals (Sweden)

    Michael Inspector

    Full Text Available OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set flat monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentations of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  6. How new L2 words (don't) become memories : Lexicalization in advanced L1 Dutch learners of L2 English as part of a longitudinal study

    NARCIS (Netherlands)

    Keijzer, Merel

    It is an undisputed fact that learning – and remembering – new words is key in successful second language acquisition. And yet researching how vocabulary acquisition takes place is one of the most difficult endeavors in second language acquisition. We can test how many L2 words a learner knows, but

  7. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  8. Do word associations assess word knowledge? A comparison of L1 and L2, child and adult word associations

    NARCIS (Netherlands)

    Cremer, M.; Dingshoff, D.; de Beer, M.; Schoonen, R.

    2011-01-01

    Differences in word associations between monolingual and bilingual speakers of Dutch can reflect differences in how well seemingly familiar words are known. In this (exploratory) study, mono- and bilingual, child and adult free word associations were compared. Responses of children and of monolingual

  9. Context affects L1 but not L2 during bilingual word recognition: an MEG study.

    Science.gov (United States)

    Pellikka, Janne; Helenius, Päivi; Mäkelä, Jyrki P; Lehtonen, Minna

    2015-03-01

    How do bilinguals manage the activation levels of the two languages and prevent interference from the irrelevant language? Using magnetoencephalography, we studied the effect of context on the activation levels of languages by manipulating the composition of word lists (the probability of the languages) presented auditorily to late Finnish-English bilinguals. We first determined the upper limit time-window for semantic access, and then focused on the preceding responses during which the actual word recognition processes were assumedly ongoing. Between 300 and 500 ms in the temporal cortices (in the N400m response) we found an asymmetric language switching effect: the responses to L1 Finnish words were affected by the presentation context unlike the responses to L2 English words. This finding suggests that the stronger language is suppressed in an L2 context, supporting models that allow auditory word recognition to be affected by contextual factors and the language system to be subject to inhibitory influence. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Development of brain networks involved in spoken word processing of Mandarin Chinese.

    Science.gov (United States)

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R

    2011-08-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on a task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.

  11. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  12. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition

    Science.gov (United States)

    Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland

    2018-01-01

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…

  13. Spoken Word Recognition and Serial Recall of Words from Components in the Phonological Network

    Science.gov (United States)

    Siew, Cynthia S. Q.; Vitevitch, Michael S.

    2016-01-01

    Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…

  14. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    Science.gov (United States)

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  15. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Full Text Available Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.

  16. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?

    Directory of Open Access Journals (Sweden)

    Martine Coene

    2015-01-01

    Full Text Available This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient’s hearing impairment, to predict a patient’s gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination.
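
    The feature-level analysis described above can be illustrated by collapsing a consonant confusion matrix over phonetic features and asking how often a feature of the stimulus is preserved in the response. The matrix, consonant set, and feature codings below are invented toy values, and this simple proportion-preserved measure stands in for the fuller feature-transmission analyses typically reported in speech audiometry.

        import numpy as np

        consonants = ["p", "b", "t", "d"]           # toy inventory
        confusions = np.array([                     # rows = presented, columns = responded
            [30,  5, 10,  2],
            [ 4, 32,  1,  8],
            [ 9,  2, 31,  3],
            [ 1,  7,  4, 33],
        ])

        VOICING = {"p": 0, "t": 0, "b": 1, "d": 1}
        PLACE = {"p": "labial", "b": "labial", "t": "alveolar", "d": "alveolar"}

        def feature_preserved(matrix, labels, feature):
            """Proportion of responses sharing the stimulus value on a given feature."""
            kept = sum(matrix[i, j]
                       for i, stim in enumerate(labels)
                       for j, resp in enumerate(labels)
                       if feature[stim] == feature[resp])
            return kept / matrix.sum()

        print(f"voicing preserved: {feature_preserved(confusions, consonants, VOICING):.2f}")  # ~0.88
        print(f"place preserved:   {feature_preserved(confusions, consonants, PLACE):.2f}")    # ~0.78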

  18. The Spoken Word, the Book and the Image in the Work of Evangelization

    Directory of Open Access Journals (Sweden)

    Jerzy Strzelczyk

    2017-06-01

    Full Text Available Little is known about the 'material' equipment of the early missionaries who set out to evangelize pagans and apostates, since the authors of the sources focused mainly on the successes (or failures) of the missions. Information concerning the 'infrastructure' of missions is rather occasional and of a fragmentary nature. The major part in the process of evangelization must have been played by the spoken word, preached indirectly or through an interpreter, at least in the areas and milieus remote from the centers of ancient civilization. It could not have been otherwise when coming into contact with communities which did not know the art of reading, still less writing. A little more attention is devoted to the other two media, that is, the written word and the images. The significance of the written word was manifold, and – at least as far as the basic liturgical books are concerned (the missal, the evangeliary?) – the manuscripts were indispensable elements of missionaries' equipment. In certain circumstances the books which the missionaries had at their disposal could acquire special – even magical – significance, most comprehensible to the Christianized people (the examples given: the evangeliary of St. Winfried-Boniface in the face of death at the hands of a pagan Frisian, and the episode with a manuscript in the story of Anskar's mission written by Rimbert). The role of plastic art representations (images) during the missions is much less frequently mentioned in the sources. After quoting a few relevant examples (Bede the Venerable, Ermoldus Nigellus, Paul the Deacon, Thietmar of Merseburg), the author also cites an interesting, although not entirely successful, attempt to use drama to instruct the Livonians in the faith while converting them to Christianity, which was reported by Henry of Latvia.

  19. Effects of Multimedia Instruction on L2 Acquisition of High-Level, Low-Frequency English Vocabulary Words

    Science.gov (United States)

    Cho, Euna

    2017-01-01

    The present study examined the effects of multimedia enhancement in video form in addition to textual information on L2 vocabulary instruction for high-level, low-frequency English words among Korean learners of English. Although input-based incidental learning of L2 vocabulary through extensive reading has been conventionally believed to be…

  20. Lexical statistics of competition in L2 versus L1 listening

    NARCIS (Netherlands)

    Cutler, A.

    2005-01-01

    Spoken-word recognition involves multiple activation of alternative word candidates and competition between these alternatives. Phonemic confusions in L2 listening increase the number of potentially active words, thus slowing word recognition by adding competitors. This study used a 70,000-word

  1. How new words (don't) become memories : Lexicalization in advanced L1 Dutch learners of L2 English

    NARCIS (Netherlands)

    Keijzer, Merel

    It is an undisputed fact that learning – and remembering – new words is key in successful second language acquisition. And yet researching how vocabulary acquisition takes place is one of the most difficult endeavors in second language acquisition. We can test how many L2 words a learner knows, but

  2. Lexical and semantic representations of L2 cognate and noncognate words acquisition in children : evidence from two learning methods

    OpenAIRE

    Comesaña, Montserrat; Soares, Ana Paula; Sánchez-Casas, Rosa; Lima, Cátia

    2012-01-01

    How bilinguals represent words in two languages and which mechanisms are responsible for second language acquisition are important questions in the bilingual and vocabulary acquisition literature. This study aims to analyze the effect of two learning methods (picture-based vs. word-based method) and two types of words (cognates and noncognates) in early stages of children’s L2 acquisition. Forty-eight native speakers of European Portuguese, all sixth graders (mean age= 10.87 years; SD= 0....

  3. Modulating the Focus of Attention for Spoken Words at Encoding Affects Frontoparietal Activation for Incidental Verbal Memory

    OpenAIRE

    Christensen, Thomas A.; Almryde, Kyle R.; Fidler, Lesley J.; Lockwood, Julie L.; Antonucci, Sharon M.; Plante, Elena

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as ...

  4. Engaging Minority Youth in Diabetes Prevention Efforts Through a Participatory, Spoken-Word Social Marketing Campaign.

    Science.gov (United States)

    Rogers, Elizabeth A; Fine, Sarah C; Handley, Margaret A; Davis, Hodari B; Kass, James; Schillinger, Dean

    2017-07-01

    To examine the reach, efficacy, and adoption of The Bigger Picture, a type 2 diabetes (T2DM) social marketing campaign that uses spoken-word public service announcements (PSAs) to teach youth about socioenvironmental conditions influencing T2DM risk. A nonexperimental pilot dissemination evaluation through high school assemblies and a Web-based platform was used. The study took place in San Francisco Bay Area high schools during 2013. In the study, 885 students were sampled from 13 high schools. A 1-hour assembly provided data, poet performances, video PSAs, and Web-based platform information. A Web-based platform featured the campaign Web site and social media. Student surveys preassembly and postassembly (knowledge, attitudes), assembly observations, school demographics, counts of Web-based utilization, and adoption were measured. Descriptive statistics, McNemar's χ2 test, and mixed modeling accounting for clustering were used to analyze data. The campaign included 23 youth poet-created PSAs. It reached >2400 students (93% self-identified non-white) through school assemblies and has garnered >1,000,000 views of Web-based video PSAs. School participants demonstrated increased short-term knowledge of T2DM as preventable, with risk driven by socioenvironmental factors (34% preassembly identified environmental causes as influencing T2DM risk compared to 83% postassembly), and perceived greater personal salience of T2DM risk reduction (p < .001 for all). The campaign has been adopted by regional public health departments. The Bigger Picture campaign showed its potential for reaching and engaging diverse youth. Campaign messaging is being adopted by stakeholders.

  5. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set including 30 words which were orally presented by a speech-language pathologist. The scores of audiovisual word perception were significantly higher than in the auditory-only condition for the children with normal hearing (P < 0.05), whereas the difference between the auditory-only and audiovisual presentation conditions was not significant for the children with hearing loss (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe to profound hearing loss in order to find out whether a cochlear implant or hearing aid has been efficient for them or not; i.e., if a child with hearing impairment who uses a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately due to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    Science.gov (United States)

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds, the effect was observed, similarly to 24-month-olds, only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Effects of prosody on spoken Thai word perception in pre-attentive brain processing: a pilot study

    Directory of Open Access Journals (Sweden)

    Kittipun Arunphalungsanti

    2016-12-01

    Full Text Available This study aimed to investigate the effect of unfamiliar stressed prosody on spoken Thai word perception during pre-attentive brain processing, as indexed by the N2a and brain-wave oscillatory activity. EEG recordings were obtained from eleven participants, who were instructed to ignore the sound stimuli while watching silent movies. Results showed that words with unfamiliar stressed prosody elicited an N2a component, and the quantitative EEG analysis found that theta and delta wave power was principally generated in the frontal area. It is possible that the unfamiliar prosody, with its differences in frequency, duration, and intensity of the sound of the Thai words, induced highly selective attention and retrieval of information from episodic memory at the pre-attentive stage of speech perception. This brain electrical activity evidence could be used in further work to develop clinical tests that evaluate frontal lobe function in speech perception.

  8. The Role of SES in Chinese (L1) and English (L2) Word Reading in Chinese-Speaking Kindergarteners

    Science.gov (United States)

    Liu, Duo; Chung, Kevin K. H.; McBride, Catherine

    2016-01-01

    The present study investigated the relationships between socioeconomic status (SES) and word reading in both Chinese (L1) and English (L2), with children's cognitive/linguistic skills considered as mediators and/or moderators. One hundred ninety-nine Chinese kindergarteners in Hong Kong with diverse SES backgrounds participated in this study. SES…

  9. Computer-Mediated Input, Output and Feedback in the Development of L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; Cheng, Junyu; O'Toole, John Mitchell

    2015-01-01

    This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary level English as a Second Language (ESL) classes. Classes were either assigned to…

  10. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With question-and-answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. It is used by ELT self-study students.

  11. Is a Picture Worth a Thousand Words? Using Images to Create a Concreteness Effect for Abstract Words: Evidence from Beginning L2 Learners of Spanish

    Science.gov (United States)

    Farley, Andrew; Pahom, Olga; Ramonda, Kris

    2014-01-01

    This study examines the lexical representation and recall of abstract words by beginning L2 learners of Spanish in the light of the predictions of the dual coding theory (Paivio 1971; Paivio and Desrochers 1980). Ninety-seven learners (forty-four males and fifty-three females) were randomly placed in the picture or non-picture group and taught…

  12. The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition

    Science.gov (United States)

    Chan, Kit Ying; Vitevitch, Michael S.

    2009-01-01

    Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…
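
    Continuing the record's example, the clustering coefficient of "cat" can be computed directly from a small neighbour graph, for instance with networkx as sketched below. The four-word fragment is only the toy example from the record; real phonological networks are built from a full lexicon.

        import networkx as nx

        # Toy phonological network: edges link words that differ by one phoneme.
        G = nx.Graph()
        G.add_edges_from([
            ("cat", "bat"), ("cat", "hat"), ("cat", "can"),
            ("bat", "hat"),   # "bat" and "hat" are also neighbours of each other
        ])

        # Clustering coefficient of "cat": the proportion of its neighbour pairs
        # (bat-hat, bat-can, hat-can) that are themselves connected.
        print(nx.clustering(G, "cat"))  # 1 of 3 possible pairs -> 0.333...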

  13. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    Science.gov (United States)

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  14. Acquiring L2 Sentence Comprehension: A Longitudinal Study of Word Monitoring in Noise

    Science.gov (United States)

    Oliver, Georgina; Gullberg, Marianne; Hellwig, Frauke; Mitterer, Holger; Indefrey, Peter

    2012-01-01

    This study investigated the development of second language online auditory processing with ab initio German learners of Dutch. We assessed the influence of different levels of background noise and different levels of semantic and syntactic target word predictability on word-monitoring latencies. There was evidence of syntactic, but not…

  15. Stimulus variability and the phonetic relevance hypothesis: effects of variability in speaking style, fundamental frequency, and speaking rate on spoken word identification.

    Science.gov (United States)

    Sommers, Mitchell S; Barcroft, Joe

    2006-04-01

    Three experiments were conducted to examine the effects of trial-to-trial variations in speaking style, fundamental frequency, and speaking rate on identification of spoken words. In addition, the experiments investigated whether any effects of stimulus variability would be modulated by phonetic confusability (i.e., lexical difficulty). In Experiment 1, trial-to-trial variations in speaking style reduced the overall identification performance compared with conditions containing no speaking-style variability. In addition, the effects of variability were greater for phonetically confusable words than for phonetically distinct words. In Experiment 2, variations in fundamental frequency were found to have no significant effects on spoken word identification and did not interact with lexical difficulty. In Experiment 3, two different methods for varying speaking rate were found to have equivalent negative effects on spoken word recognition and similar interactions with lexical difficulty. Overall, the findings are consistent with a phonetic-relevance hypothesis, in which accommodating sources of acoustic-phonetic variability that affect phonetically relevant properties of speech signals can impair spoken word identification. In contrast, variability in parameters of the speech signal that do not affect phonetically relevant properties are not expected to affect overall identification performance. Implications of these findings for the nature and development of lexical representations are discussed.

  16. THE INFLUENCE OF SYLLABIFICATION RULES IN L1 ON L2 WORD RECOGNITION.

    Science.gov (United States)

    Choi, Wonil; Nam, Kichun; Lee, Yoonhyoung

    2015-10-01

    Experiments with Korean learners of English and English monolinguals were conducted to examine whether knowledge of syllabification in the native language (Korean) affects the recognition of printed words in the non-native language (English). Another purpose of this study was to test whether syllables are the processing unit in Korean visual word recognition. In Experiment 1, 26 native Korean speakers and 19 native English speakers participated. In Experiment 2, 40 native Korean speakers participated. In both experiments, syllable length was manipulated based on the Korean syllabification rule and the participants performed a lexical decision task. Analyses of variance were performed for the lexical decision latencies and error rates in both experiments. The results from Korean learners of English showed that two-syllable words based on the Korean syllabification rule were recognized faster as words than various types of three-syllable words, suggesting that Korean learners of English exploited their L1 phonological knowledge in recognizing English words. The results of the current study also support the idea that syllables are a processing unit of Korean visual word recognition.

  17. Modulating the Focus of Attention for Spoken Words at Encoding Affects Frontoparietal Activation for Incidental Verbal Memory

    Directory of Open Access Journals (Sweden)

    Thomas A. Christensen

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as listeners heard a list of single English nouns. We then presented these words again in the context of a recognition task and assessed the effect of modulating attention at encoding on the BOLD responses to words that were either attended strongly, weakly, or not heard previously. MRI revealed activity in right-lateralized inferior parietal and prefrontal regions, and positive BOLD signals varied with the relative level of attention present at encoding. Temporal analysis of hemodynamic responses further showed that the time course of BOLD activity was modulated differentially by unintentionally encoded words compared to novel items. Our findings largely support current models of memory consolidation and retrieval, but they also provide fresh evidence for hemispheric differences and functional subdivisions in right frontoparietal attention networks that help shape auditory episodic recall.

  18. Modulating the focus of attention for spoken words at encoding affects frontoparietal activation for incidental verbal memory.

    Science.gov (United States)

    Christensen, Thomas A; Almryde, Kyle R; Fidler, Lesley J; Lockwood, Julie L; Antonucci, Sharon M; Plante, Elena

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as listeners heard a list of single English nouns. We then presented these words again in the context of a recognition task and assessed the effect of modulating attention at encoding on the BOLD responses to words that were either attended strongly, weakly, or not heard previously. MRI revealed activity in right-lateralized inferior parietal and prefrontal regions, and positive BOLD signals varied with the relative level of attention present at encoding. Temporal analysis of hemodynamic responses further showed that the time course of BOLD activity was modulated differentially by unintentionally encoded words compared to novel items. Our findings largely support current models of memory consolidation and retrieval, but they also provide fresh evidence for hemispheric differences and functional subdivisions in right frontoparietal attention networks that help shape auditory episodic recall.

  19. A dual contribution to the involuntary semantic processing of unexpected spoken words.

    Science.gov (United States)

    Parmentier, Fabrice B R; Turner, Jacqueline; Perez, Laura

    2014-02-01

    Sounds are a major cause of distraction. Unexpected to-be-ignored auditory stimuli presented in the context of an otherwise repetitive acoustic background ineluctably break through selective attention and distract people from an unrelated visual task (deviance distraction). This involuntary capture of attention by deviant sounds has been hypothesized to trigger their semantic appraisal and, in some circumstances, interfere with ongoing performance, but it remains unclear how such processing compares with the automatic processing of distractors in classic interference tasks (e.g., Stroop, flanker, Simon tasks). Using a cross-modal oddball task, we assessed the involuntary semantic processing of deviant sounds in the presence and absence of deviance distraction. The results revealed that some involuntary semantic analysis of spoken distractors occurs in the absence of deviance distraction but that this processing is significantly greater in its presence. We conclude that the automatic processing of spoken distractors reflects 2 contributions, one that is contingent upon deviance distraction and one that is independent from it.

  20. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words

    NARCIS (Netherlands)

    Takashima, A.; Bakker, I.; Hell, J.G. van; Janzen, G.; McQueen, J.M.

    2017-01-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and

  1. Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children

    Directory of Open Access Journals (Sweden)

    Mélanie Havy

    2017-12-01

    From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.

  2. A connectionist model for the simulation of human spoken-word recognition

    NARCIS (Netherlands)

    Kuijk, D.J. van; Wittenburg, P.; Dijkstra, A.F.J.; Den Brinker, B.P.L.M.; Beek, P.J.; Brand, A.N.; Maarse, F.J.; Mulder, L.J.M.

    1999-01-01

    A new psycholinguistically motivated and neural network-based model of human word recognition is presented. In contrast to earlier models it uses real speech as input. At the word layer acoustical and temporal information is stored by sequences of connected sensory neurons that pass on sensor

  3. Comparing L2 Word Learning through a Tablet or Real Objects: What Benefits Learning Most?

    NARCIS (Netherlands)

    Vlaar, M.A.J.; Verhagen, J.; Oudgenoeg-Paz, O.; Leseman, P.P.M.

    2017-01-01

    In child-robot interactions focused on language learning, tablets are often used to structure the interaction between the robot and the child. However, it is not clear how tablets affect children’s learning gains. Real-life objects are thought to benefit children’s word learning, but it is not clear

  4. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words.

    Science.gov (United States)

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2017-04-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Tracing attention and the activation flow in spoken word planning using eye-movements

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements

  6. Distinct Patterns of Brain Activity Characterise Lexical Activation and Competition in Spoken Word Production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Jensen, O.; Schoffelen, J.M.; Bonnefond, M.

    2014-01-01

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography

  7. Distinct patterns of brain activity characterise lexical activation and competition in spoken word production.

    Directory of Open Access Journals (Sweden)

    Vitória Piai

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350-650 ms (4-10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
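
    As a rough illustration of the distinction drawn in this record between phase-locked and non-phase-locked activity, the sketch below uses toy single-channel data (the sizes and variable names are assumptions, and this is not the authors' MEG pipeline): the trial average serves as the phase-locked (evoked) component, and the power left after subtracting that average from each trial approximates non-phase-locked (induced) activity. Real analyses would typically do this per frequency band with a time-frequency decomposition.

        # Toy separation of phase-locked (evoked) and non-phase-locked (induced) activity.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_times = 60, 500                      # hypothetical trial count and samples
        trials = rng.normal(size=(n_trials, n_times))    # stand-in for single-channel MEG epochs

        evoked = trials.mean(axis=0)                     # phase-locked component
        residual = trials - evoked                       # remove the evoked response per trial
        induced_power = (residual ** 2).mean(axis=0)     # non-phase-locked power over time
        total_power = (trials ** 2).mean(axis=0)
        print(evoked.shape, induced_power.shape, total_power.shape)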

  8. Visual information constrains early and late stages of spoken-word recognition in sentence context.

    Science.gov (United States)

    Brunellière, Angèle; Sánchez-García, Carolina; Ikumi, Nara; Soto-Faraco, Salvador

    2013-07-01

    Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger long-lasting N400, compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Grasp it loudly! Supporting actions with semantically congruent spoken action words.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g. "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters such as velocity peaks revealed that when participants produced action-related words their movements became faster compared to conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant to the point that language can refine overt motor behaviour.

  10. Tracing Attention and the Activation Flow of Spoken Word Planning Using Eye Movements

    Science.gov (United States)

    Roelofs, Ardi

    2008-01-01

    The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The…

  11. Children show right-lateralized effects of spoken word-form learning.

    Directory of Open Access Journals (Sweden)

    Anni Nora

    It is commonly thought that phonological learning is different in young children compared to adults, possibly due to the speech processing system not yet having reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6-8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language rather than maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, and in contrast to a salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either native or foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody.

  12. Lexical Tone Variation and Spoken Word Recognition in Preschool Children: Effects of Perceptual Salience

    Science.gov (United States)

    Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.

    2017-01-01

    Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…

  13. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    NARCIS (Netherlands)

    Jesse, A.; McQueen, J.M.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes

  14. Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment

  15. Two-year-olds' sensitivity to subphonemic mismatch during online spoken word recognition.

    Science.gov (United States)

    Paquette-Smith, Melissa; Fecher, Natalie; Johnson, Elizabeth K

    2016-11-01

    Sensitivity to noncontrastive subphonemic detail plays an important role in adult speech processing, but little is known about children's use of this information during online word recognition. In two eye-tracking experiments, we investigate 2-year-olds' sensitivity to a specific type of subphonemic detail: coarticulatory mismatch. In Experiment 1, toddlers viewed images of familiar objects (e.g., a boat and a book) while hearing labels containing appropriate or inappropriate coarticulation. Inappropriate coarticulation was created by cross-splicing the coda of the target word onto the onset of another word that shared the same onset and nucleus (e.g., to create boat, the final consonant of boat was cross-spliced onto the initial CV of bone). We tested 24-month-olds and 29-month-olds in this paradigm. Both age groups behaved similarly, readily detecting the inappropriate coarticulation (i.e., showing better recognition of identity-spliced than cross-spliced items). In Experiment 2, we asked how children's sensitivity to subphonemic mismatch compared to their sensitivity to phonemic mismatch. Twenty-nine-month-olds were presented with targets that contained either a phonemic (e.g., the final consonant of boat was spliced onto the initial CV of bait) or a subphonemic mismatch (e.g., the final consonant of boat was spliced onto the initial CV of bone). Here, the subphonemic (coarticulatory) mismatch was not nearly as disruptive to children's word recognition as a phonemic mismatch. Taken together, our findings support the view that 2-year-olds, like adults, use subphonemic information to optimize online word recognition.

  16. From the Spoken Word to Video: Orality, Literacy, Mediated Orality, and the Amazigh (Berber) Cultural Production

    Directory of Open Access Journals (Sweden)

    Daniela Merolla

    2005-08-01

    This article presents new directions in Tamazight/Berber artistic productions. The development of theatre, films and videos in Tamazight is set in the framework of the historical and literary background in the Maghreb and in the lands of Amazigh Diaspora. It also includes an interview with the video-maker and director Agouram Salout. Key Words: tamazight, berber, theatre, videos, film, taqbaylit, tarifit, tachelhit, new cultural production, writing, orality

  17. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    Directory of Open Access Journals (Sweden)

    Vitória Piai

    2013-12-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal colour naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus. Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the anterior cingulate cortex, a region that is likely implementing domain

  18. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
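
    The -5 dB and 0 dB signal-to-noise ratios used in this record follow the standard definition SNR(dB) = 10*log10(P_signal / P_noise). The sketch below is a hedged illustration of how a masker can be scaled so that a target word is embedded at a chosen SNR; the signals, sampling rate, and function name are assumptions rather than the authors' stimulus-preparation code.

        # Scale a masker so the target is presented at a chosen signal-to-noise ratio.
        import numpy as np

        def mix_at_snr(target, masker, snr_db):
            """Scale `masker` so that 10*log10(P_target / P_masker) equals `snr_db`, then add."""
            masker = masker[:len(target)]
            p_target = np.mean(target ** 2)
            p_masker = np.mean(masker ** 2)
            gain = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
            return target + gain * masker

        rng = np.random.default_rng(1)
        word = rng.normal(size=16000)      # stand-in for a 1-s spoken target word at 16 kHz
        babble = rng.normal(size=16000)    # stand-in for 4-talker babble or fluctuating noise
        mixture = mix_at_snr(word, babble, snr_db=-5.0)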

  19. Vocabulary Learning in a Yorkshire Terrier: Slow Mapping of Spoken Words

    Science.gov (United States)

    Griebel, Ulrike; Oller, D. Kimbrough

    2012-01-01

    Rapid vocabulary learning in children has been attributed to “fast mapping”, with new words often claimed to be learned through a single presentation. As reported in 2004 in Science a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion. PMID:22363421

  20. Vocabulary learning in a Yorkshire terrier: slow mapping of spoken words.

    Directory of Open Access Journals (Sweden)

    Ulrike Griebel

    Rapid vocabulary learning in children has been attributed to "fast mapping", with new words often claimed to be learned through a single presentation. As reported in 2004 in Science a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion.

  1. The role of visual representations within working memory for paired-associate and serial order of spoken words.

    Science.gov (United States)

    Ueno, Taiji; Saito, Satoru

    2013-09-01

    Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.

  2. Research note: exceptional absolute pitch perception for spoken words in an able adult with autism.

    Science.gov (United States)

    Heaton, Pamela; Davis, Robert E; Happé, Francesca G E

    2008-01-01

    Autism is a neurodevelopmental disorder, characterised by deficits in socialisation and communication, with repetitive and stereotyped behaviours [American Psychiatric Association (1994). Diagnostic and statistical manual for mental disorders (4th ed.). Washington, DC: APA]. Whilst intellectual and language impairment is observed in a significant proportion of diagnosed individuals [Gillberg, C., & Coleman, M. (2000). The biology of the autistic syndromes (3rd ed.). London: Mac Keith Press; Klinger, L., Dawson, G., & Renner, P. (2002). Autistic disorder. In E. Mash & R. Barkley (Eds.), Child psychopathology (2nd ed., pp. 409-454). New York: Guilford Press], the disorder is also strongly associated with the presence of highly developed, idiosyncratic, or savant skills [Heaton, P., & Wallace, G. (2004). Annotation: The savant syndrome. Journal of Child Psychology and Psychiatry, 45 (5), 899-911]. We tested identification of fundamental pitch frequencies in complex tones, sine tones and words in AC, an intellectually able man with autism and absolute pitch (AP), and a group of healthy controls with self-reported AP. The analysis showed that AC's naming of speech pitch was highly superior in comparison to controls. The results suggest that explicit access to perceptual information in speech is retained to a significantly higher degree in autism.

  3. Lexical and semantic representations in the acquisition of L2 cognate and non-cognate words: evidence from two learning methods in children.

    Science.gov (United States)

    Comesaña, Montserrat; Soares, Ana Paula; Sánchez-Casas, Rosa; Lima, Cátia

    2012-08-01

    How bilinguals represent words in two languages and which mechanisms are responsible for second language acquisition are important questions in the bilingual and vocabulary acquisition literature. This study aims to analyse the effect of two learning methods (picture- vs. word-based method) and two types of words (cognates and non-cognates) in early stages of children's L2 acquisition. Forty-eight native speakers of European Portuguese, all sixth graders (mean age = 10.87 years; SD = 0.85), participated in the study. None of them had prior knowledge of Basque (the L2 in this study). After a learning phase in which L2 words were learned either by a picture- or a word-based method, children were tested in a backward-word translation recognition task at two times (immediately vs. one week later). Results showed that the participants made more errors when rejecting semantically related than semantically unrelated words as correct translations (semantic interference effect). The magnitude of this effect was higher in the delayed test condition regardless of the learning method. Moreover, the overall performance of participants from the word-based method was better than the performance of participants from the picture-based method. Results were discussed concerning the most significant bilingual lexical processing models. ©2011 The British Psychological Society.

  4. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  5. Translation norms for English and Spanish: The role of lexical variables, word class, and L2 proficiency in negotiating translation ambiguity

    Science.gov (United States)

    Prior, Anat; MacWhinney, Brian; Kroll, Judith F.

    2014-01-01

    We present a set of translation norms for 670 English and 760 Spanish nouns, verbs and class ambiguous items that varied in their lexical properties in both languages, collected from 80 bilingual participants. Half of the words in each language received more than a single translation across participants. Cue word frequency and imageability were both negatively correlated with number of translations. Word class predicted number of translations: Nouns had fewer translations than did verbs, which had fewer translations than class-ambiguous items. The translation probability of specific responses was positively correlated with target word frequency and imageability, and with its form overlap with the cue word. Translation choice was modulated by L2 proficiency: Less proficient bilinguals tended to produce lower probability translations than more proficient bilinguals, but only in forward translation, from L1 to L2. These findings highlight the importance of translation ambiguity as a factor influencing bilingual representation and performance. The norms can also provide an important resource to assist researchers in the selection of experimental materials for studies of bilingual and monolingual language performance. These norms may be downloaded from www.psychonomic.org/archive. PMID:18183923

  6. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Individual Differences in L2 Processing of Multi-word Phrases: Effects of Working Memory and Personality

    NARCIS (Netherlands)

    Kerz, E.; Wiechmann, D.; Mitkov, R.

    2017-01-01

    There is an accumulating body of evidence that knowledge of the statistics of multiword phrases (MWP) facilitates native language learning and processing both in children and adults. However, less is known about whether adult second language (L2) learners are able to develop native-like sensitivity

  8. TISK 1.0: An easy-to-use Python implementation of the time-invariant string kernel model of spoken word recognition.

    Science.gov (United States)

    You, Heejo; Magnuson, James S

    2018-04-30

    This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development tools, the relative merits of TISK as compared to other models, and the ways in which researchers might use this implementation to guide their own research and theory development. We describe a TISK model that includes features that facilitate in-line graphing of simulation results, integration with standard Python data formats, and graph and data export. The distribution can be downloaded from https://github.com/maglab-uconn/TISK1.0 .
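
    To make the idea of time-invariant, open-diphone units more concrete: each unit stands for an ordered pair of phonemes occurring anywhere in the word, adjacent or not, so the same small inventory of units covers a word regardless of when its phonemes arrive. The sketch below illustrates that representation under this assumption only; actual implementations such as TISK may restrict the allowed gap between the two phonemes, and this is not TISK's code or API.

        # Illustration of open-diphone units: ordered phoneme pairs, adjacent or not.
        from itertools import combinations

        def open_diphones(phonemes):
            """Return the set of ordered (earlier, later) phoneme pairs in a word."""
            return {(a, b) for a, b in combinations(phonemes, 2)}

        print(sorted(open_diphones(["k", "ae", "t"])))   # "cat": (ae,t), (k,ae), (k,t)
        print(sorted(open_diphones(["ae", "k", "t"])))   # "act": (ae,k), (ae,t), (k,t)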

  9. Auditory semantic processing in dichotic listening: effects of competing speech, ear of presentation, and sentential bias on N400s to spoken words in context.

    Science.gov (United States)

    Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer

    2014-12-01

    The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at a SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the le/RH produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the re/LH. The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. An Interaction Between the Effects of Bilingualism and Cross-linguistic Similarity in Balanced and Unbalanced Bilingual Adults' L2 Mandarin Word-Reading Production.

    Science.gov (United States)

    Hsu, Hsiu-Ling

    2017-08-01

    We conducted three experiments investigating in more detail the interaction between the two effects of bilingualism and L1-L2 similarity in the speech performance of balanced and unbalanced bilinguals. In Experiment 1, L1 Mandarin monolinguals and two groups of Hakka and Minnan balanced bilinguals (Hakka: more similar to Mandarin) performed a non-contextual single-character reading task in Mandarin, which required more inhibitory control. The two bilingual groups outperformed the monolinguals, regardless of their L1 background. However, the bilingual advantage was not found in a contextual multi-word task (Experiment 2), but instead the effect of cross-linguistic similarity emerged. Furthermore, in Experiment 3, the Hakka unbalanced bilinguals showed an advantage in the non-contextual task, while their Minnan counterparts did not, and the impact of L1-L2 similarity emerged in both tasks. These results unveiled the way the two effects dynamically interplayed depending on the task contexts and the relative degrees of using L1 and L2.

  11. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
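
    For readers unfamiliar with the RSFC measure reported in this record, it is commonly estimated as the Pearson correlation between the mean BOLD time courses of two regions, often Fisher z-transformed before group statistics. The sketch below uses simulated time courses and hypothetical variable names; it is not the study's pipeline, which also involved preprocessing, seed definition, and the Granger causality analyses described above.

        # Toy resting-state functional connectivity between two region time courses.
        import numpy as np

        rng = np.random.default_rng(2)
        n_volumes = 200                                       # hypothetical number of resting-state volumes
        vwfa_ts = rng.normal(size=n_volumes)                  # stand-in VWFA seed time course
        lifg_ts = 0.4 * vwfa_ts + rng.normal(size=n_volumes)  # stand-in LIFG time course with some coupling

        rsfc = np.corrcoef(vwfa_ts, lifg_ts)[0, 1]
        fisher_z = np.arctanh(rsfc)                           # Fisher z-transform for group statistics
        print(round(rsfc, 3), round(fisher_z, 3))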

  12. Recurrent Word Combinations in EAP Test-Taker Writing: Differences between High- and Low-Proficiency Levels

    Science.gov (United States)

    Appel, Randy; Wood, David

    2016-01-01

    The correct use of frequently occurring word combinations represents an important part of language proficiency in spoken and written discourse. This study investigates the use of English-language recurrent word combinations in low-level and high-level L2 English academic essays sourced from the Canadian Academic English Language (CAEL) assessment.…

  13. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    Science.gov (United States)

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  14. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  15. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use over Time

    Science.gov (United States)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    2013-01-01

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semi-fixed multi-word units (MWUs), which comprise fixed parts with the potential…

  16. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use Over Time

    NARCIS (Netherlands)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semifixed multi-word units (MWUs),

  17. The Concreteness Effect and the Bilingual Lexicon: The Impact of Visual Stimuli Attachment on Meaning Recall of Abstract L2 Words

    Science.gov (United States)

    Farley, Andrew P.; Ramonda, Kris; Liu, Xun

    2012-01-01

    According to the Dual-Coding Theory (Paivio & Desrochers, 1980), words that are associated with rich visual imagery are more easily learned than abstract words due to what is termed the concreteness effect (Altarriba & Bauer, 2004; de Groot, 1992; de Groot et al., 1994; ter Doest & Semin, 2005). The present study examined the effects of attaching…

  18. Words with Multiple Meanings in Authentic L2 Texts: An Analysis of "Harry Potter and the Philosopher's Stone"

    Science.gov (United States)

    Ozturk, Meral

    2017-01-01

    Dictionary studies have suggested that nearly half of the English lexicon have multiple meanings. It is not yet clear, however, if second language learners reading English texts will encounter words with multiple meanings to the same degree. This study investigates the use of words with multiple meanings in an authentic English novel. Two samples…

  19. Rethinking spoken fluency

    OpenAIRE

    McCarthy, Michael

    2009-01-01

    This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency and suggests that spoken fluency is best understood as an interactive achievement, and offers the metaphor of ‘confluence’ to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...

  20. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data.

    Science.gov (United States)

    Gow, David W; Olson, Bruna B

    2015-07-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
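
    Granger causality, as used in this record, asks whether the past of one signal improves prediction of another signal beyond that signal's own past, by comparing a restricted autoregressive model against a full one. The sketch below is a minimal bivariate illustration on simulated data with ordinary least squares and a fixed model order; the variable names and settings are assumptions, and the study's actual effective-connectivity analyses of MR-constrained MEG/EEG source estimates are considerably more involved.

        # Minimal bivariate Granger causality: does x's past help predict y?
        import numpy as np

        def lagged_matrix(series, order, n):
            """Columns are the series lagged by 1..order, aligned with series[order:]."""
            return np.column_stack([series[order - k : n - k] for k in range(1, order + 1)])

        def granger_f(y, x, order=2):
            """F statistic for 'x Granger-causes y' using OLS autoregressions of a fixed order."""
            n = len(y)
            target = y[order:]
            ones = np.ones((n - order, 1))
            y_lags = lagged_matrix(y, order, n)
            x_lags = lagged_matrix(x, order, n)

            def rss(design):
                beta, *_ = np.linalg.lstsq(design, target, rcond=None)
                resid = target - design @ beta
                return float(resid @ resid)

            rss_restricted = rss(np.hstack([ones, y_lags]))        # y's own past only
            rss_full = rss(np.hstack([ones, y_lags, x_lags]))      # plus x's past
            df1, df2 = order, (n - order) - (1 + 2 * order)
            return ((rss_restricted - rss_full) / df1) / (rss_full / df2)

        rng = np.random.default_rng(3)
        x = rng.normal(size=300)
        y = np.zeros(300)
        for t in range(2, 300):                                    # y is driven by x's past, so F should be large
            y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
        print(round(granger_f(y, x, order=2), 2))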

  1. Attentional Capture of Objects Referred to by Spoken Language

    Science.gov (United States)

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  2. Processing spoken lectures in resource-scarce environments

    CSIR Research Space (South Africa)

    Van Heerden, CJ

    2011-11-01

    … and then adapting or training new models using the segmented spoken lectures. The eventual systems perform quite well, aligning more than 90% of a selected set of target words successfully.

  3. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Nneka Umera-Okeke

    ... spoken French of IUFLs. Key words: IUFLs, Epenthesis, Ijebu dialect, Autosegmental phonology .... Ambiguities may result: salmi "strait" vs. salami. (An exception is that in .... tiers of segments. In the picture given us by classical generative.

  4. The influence of spelling on phonological encoding in word reading, object naming, and word generation

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2006-01-01

    Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation

  5. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  6. Spoken Grammar for Chinese Learners

    Institute of Scientific and Technical Information of China (English)

    徐晓敏

    2013-01-01

    Currently, the concept of spoken grammar has been mentioned among Chinese teachers. However, teachers in China still have a vague idea of spoken grammar. Therefore, this dissertation examines what spoken grammar is and argues that native speakers’ model of spoken grammar needs to be highlighted in classroom teaching.

  7. Finding words in a language that allows words without vowels

    NARCIS (Netherlands)

    El Aissati, A.; McQueen, J.M.; Cutler, A.

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the

  8. PL2 production of english word-final consonants: the role of orthography and learner profile variables Produção de consoantes finais do inglês como L2: o papel da ortografia e de variáveis relacionadas ao perfil do aprendiz

    Directory of Open Access Journals (Sweden)

    Rosane Silveira

    2012-06-01

    The present study investigates some factors affecting the acquisition of second language (L2) phonology by learners with considerable exposure to the target language in an L2 context. More specifically, the purpose of the study is two-fold: (a) to investigate the extent to which participants resort to phonological processes resulting from the transfer of L1 sound-spelling correspondence into the L2 when pronouncing English word-final consonants; and (b) to examine the relationship between rate of transfer and learner profile factors, namely proficiency level, length of residence in the L2 country, age of arrival in the L2 country, education, chronological age, use of English with native speakers, attendance in EFL courses, and formal education. The investigation involved 31 Brazilian speakers living in the United States with diverse profiles. Data were collected using a questionnaire to elicit the participants' profiles, a sentence-reading test (pronunciation measure), and an oral picture-description test (L2 proficiency measure). The results indicate that even in an L2 context, the transfer of L1 sound-spelling correspondence to the production of L2 word-final consonants is frequent. The findings also reveal that extensive exposure to rich L2 input leads to the development of proficiency and improves production of L2 word-final consonants.

  9. Effects of Rhyme and Spelling Patterns on Auditory Word ERPs Depend on Selective Attention to Phonology

    Science.gov (United States)

    Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.

    2013-01-01

    ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…

  10. Finding words in a language that allows words without vowels.

    Science.gov (United States)

    El Aissati, Abder; McQueen, James M; Cutler, Anne

    2012-07-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the constraint would be counter-productive in certain languages that allow stand-alone vowelless open-class words. One such language is Berber (where t is indeed a word). Berber listeners here detected words affixed to nonsense contexts with or without vowels. Length effects seen in other languages replicated in Berber, but in contrast to prior findings, word detection was not hindered by vowelless contexts. When words can be vowelless, otherwise universal constraints disfavoring vowelless words do not feature in spoken-word recognition. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. "Now We Have Spoken."

    Science.gov (United States)

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  12. Effects of providing word sounds during printed word learning

    NARCIS (Netherlands)

    Reitsma, P.; Dongen, van A.J.N.; Custers, E.

    1984-01-01

    The purpose of this study was to explore the effects of the availability of the spoken sound of words along with the printed forms during reading practice. Firstgrade children from two normal elementary schools practised reading several unfamiliar words in print. For half of the printed words the

  13. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  14. Vocabulary Acquisition in L2: Does CALL Really Help?

    Science.gov (United States)

    Averianova, Irina

    2015-01-01

    Language competence in various communicative activities in L2 largely depends on the learners' size of vocabulary. The target vocabulary of adult L2 learners should be between 2,000 high frequency words (a critical threshold) and 10,000 word families (for comprehension of university texts). For a TOEIC test, the threshold is estimated to be…

  15. What does že jo (and že ne) mean in spoken dialogue

    Czech Academy of Sciences Publication Activity Database

    Komrsková, Zuzana

    2017-01-01

    Roč. 68, č. 2 (2017), s. 229-237 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords: spoken language * spoken corpus * tag question * response word Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  16. Transfer of L1 Visual Word Recognition Strategies during Early Stages of L2 Learning: Evidence from Hebrew Learners Whose First Language Is Either Semitic or Indo-European

    Science.gov (United States)

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2016-01-01

    The present study examined visual word recognition processes in Hebrew (a Semitic language) among beginning learners whose first language (L1) was either Semitic (Arabic) or Indo-European (e.g. English). To examine if learners, like native Hebrew speakers, exhibit morphological sensitivity to root and word-pattern morphemes, learners made an…

  17. THE RECOGNITION OF SPOKEN MONO-MORPHEMIC COMPOUNDS IN CHINESE

    Directory of Open Access Journals (Sweden)

    Yu-da Lai

    2012-12-01

    This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters, whether or not they are morphemic. A mono-morphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine if this purely orthographic difference affects auditory lexical access by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition task and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.

  18. L2 Working Memory Capacity and L2 Reading Skill.

    Science.gov (United States)

    Harrington, Mike; Sawyer, Mark

    1992-01-01

    Examines the sensitivity of second-language (L2) working memory (ability to store and process information simultaneously) to differences in reading skills among advanced L2 learners. Subjects with larger L2 working memory capacities scored higher on measures of L2 reading skills, but no correlation was found between reading and passive short-term…

  19. Is spoken Danish less intelligible than Swedish?

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.

    2010-01-01

    The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is

  20. Word level language identification in online multilingual communication

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Dogruoz, A. Seza

    2013-01-01

    Multilingual speakers switch between languages in online and spoken communication. Analyses of large scale multilingual data require automatic language identification at the word level. For our experiments with multilingual online discussions, we first tag the language of individual words using
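
    Word-level language identification of the kind described above is commonly approximated with character n-gram features and a lightweight classifier. The sketch below illustrates that general approach under toy assumptions (a tiny hand-made training set and a Dutch/Turkish label set chosen purely for illustration); it is not the tagger used in the cited work.

```python
# Hedged sketch: word-level language identification with character n-grams
# and a Naive Bayes classifier. The training words and the Dutch/Turkish
# label set are toy assumptions, not the data or tagger of the cited study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_words = ["gisteren", "vandaag", "mooi", "huis",      # Dutch examples
               "bugün", "yarın", "güzel", "ev"]             # Turkish examples
train_labels = ["nl", "nl", "nl", "nl", "tr", "tr", "tr", "tr"]

# Character n-grams within word boundaries capture orthographic cues.
clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    MultinomialNB(),
)
clf.fit(train_words, train_labels)

# Tag each word of a code-switched utterance independently.
utterance = "vandaag ev güzel".split()
print(list(zip(utterance, clf.predict(utterance))))
```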

  1. The Relationship between Three Measures of L2 Vocabulary Knowledge and L2 Listening and Reading

    Science.gov (United States)

    Cheng, Junyu; Matthews, Joshua

    2018-01-01

    This study explores the constructs that underpin three different measures of vocabulary knowledge and investigates the degree to which these three measures correlate with, and are able to predict, measures of second language (L2) listening and reading. Word frequency structured vocabulary tests tapping "receptive/orthographic (RecOrth)…

  2. Word Order Acquisition in Persian Speaking Children

    Directory of Open Access Journals (Sweden)

    Nahid Jalilevand

    2017-06-01

    Discussion: Despite the fact that spoken Persian has no strict word order, Persian-speaking children tend to use the other logically possible orders of subject (S), verb (V), and object (O) less often than the SOV structure.

  3. Auditory word recognition is not more sensitive to word-initial than to word-final stimulus information

    NARCIS (Netherlands)

    Vlugt, van der M.J.; Nooteboom, S.G.

    1986-01-01

    Several accounts of human recognition of spoken words assign special importance to stimulus-word onsets. The experiment described here was designed to find out whether such a word-beginning superiority effect, which is supported by experimental evidence of various kinds, is due to a special

  4. Lexical representation of novel L2 contrasts

    Science.gov (United States)

    Hayes-Harb, Rachel; Masuda, Kyoko

    2005-04-01

    There is much interest among psychologists and linguists in the influence of the native language sound system on the acquisition of second languages (Best, 1995; Flege, 1995). Most studies of second language (L2) speech focus on how learners perceive and produce L2 sounds, but we know of only two that have considered how novel sound contrasts are encoded in learners' lexical representations of L2 words (Pallier et al., 2001; Ota et al., 2002). In this study we investigated how native speakers of English encode Japanese consonant quantity contrasts in their developing Japanese lexicons at different stages of acquisition (Japanese contrasts singleton versus geminate consonants but English does not). Monolingual English speakers, native English speakers learning Japanese for one year, and native speakers of Japanese were taught a set of Japanese nonwords containing singleton and geminate consonants. Subjects then performed memory tasks eliciting perception and production data to determine whether they encoded the Japanese consonant quantity contrast lexically. Overall accuracy in these tasks was a function of Japanese language experience, and acoustic analysis of the production data revealed non-native-like patterns of differentiation of singleton and geminate consonants among the L2 learners of Japanese. Implications for theories of L2 speech are discussed.

  5. Inferring Speaker Affect in Spoken Natural Language Communication

    OpenAIRE

    Pon-Barry, Heather Roberta

    2012-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...

  6. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They have studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data was collected at the time the students sat for the mid-term oral test and was further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is due to the influence of their own language system, called interference, especially in word order.

  7. The employment of a spoken language computer applied to an air traffic control task.

    Science.gov (United States)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  8. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  9. Words, Words, Words: English, Vocabulary.

    Science.gov (United States)

    Lamb, Barbara

    The Quinmester course on words gives the student the opportunity to increase his proficiency by investigating word origins, word histories, morphology, and phonology. The course includes the following: dictionary skills and familiarity with the "Oxford," "Webster's Third," and "American Heritage" dictionaries; word…

  10. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  11. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  12. "Daddy, Where Did the Words Go?" How Teachers Can Help Emergent Readers Develop a Concept of Word in Text

    Science.gov (United States)

    Flanigan, Kevin

    2006-01-01

    This article focuses on a concept that has rarely been studied in beginning reading research--a child's concept of word in text. Recent examinations of this phenomenon suggest that a child's ability to match spoken words to written words while reading--a concept of word in text--plays a pivotal role in early reading development. In this article,…

  13. The Study of Synonymous Word "Mistake"

    OpenAIRE

    Suwardi, Albertus

    2016-01-01

    This article discusses the synonymous word "mistake". The discussion will also cover the meaning of 'word' itself. Words can be considered as forms, whether spoken or written, or alternatively as composite expressions, which combine form and meaning. Synonyms are different phonological words which have the same or very similar meanings. The synonyms of mistake are error, fault, blunder, slip, slip-up, gaffe and inaccuracy. The data is taken from a computer program. The procedure of data collection is...

  14. Age of acquisition and word frequency in written picture naming.

    Science.gov (United States)

    Bonin, P; Fayol, M; Chalard, M

    2001-05-01

    This study investigates age of acquisition (AoA) and word frequency effects in both spoken and written picture naming. In the first two experiments, reliable AoA effects on object naming speed, with objective word frequency controlled for, were found in both spoken (Experiment 1) and written picture naming (Experiment 2). In contrast, no reliable objective word frequency effects were observed on naming speed, with AoA controlled for, in either spoken (Experiment 3) or written (Experiment 4) picture naming. The implications of the findings for written picture naming are briefly discussed.
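
    The logic of "an AoA effect with objective word frequency controlled for" amounts to entering both variables as simultaneous predictors of naming latencies. The hedged sketch below shows that kind of multiple regression on simulated data; the variable names, simulated values, and use of ordinary least squares are illustrative assumptions, not the authors' exact analysis.

```python
# Hedged sketch: testing an age-of-acquisition (AoA) effect on naming
# latencies with word frequency statistically controlled, via multiple
# regression on simulated item-level data. This mirrors the logic of the
# analysis, not the authors' exact statistics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_items = 120
aoa = rng.uniform(2, 10, n_items)          # rated age of acquisition (years)
log_freq = rng.normal(3, 1, n_items)       # log objective word frequency
latency = 600 + 25 * aoa - 5 * log_freq + rng.normal(0, 40, n_items)

# Both predictors entered simultaneously: the AoA coefficient is the AoA
# effect with frequency held constant, and vice versa.
X = sm.add_constant(np.column_stack([aoa, log_freq]))
model = sm.OLS(latency, X).fit()
print(model.params)    # [intercept, AoA slope, log-frequency slope]
print(model.pvalues)
```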

  15. Word order in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2012-01-01

    The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase as one of the most crucial aspects of grammar of any spoken language. It aims to investigate the order of the primary constituents which can either be subject, object, or verb of a simple

  16. The word-length effect and disyllabic words.

    Science.gov (United States)

    Lovatt, P; Avons, S E; Masterson, J

    2000-02-01

    Three experiments compared immediate serial recall of disyllabic words that differed on spoken duration. Two sets of long- and short-duration words were selected, in each case maximizing duration differences but matching for frequency, familiarity, phonological similarity, and number of phonemes, and controlling for semantic associations. Serial recall measures were obtained using auditory and visual presentation and spoken and picture-pointing recall. In Experiments 1a and 1b, using the first set of items, long words were better recalled than short words. In Experiments 2a and 2b, using the second set of items, no difference was found between long and short disyllabic words. Experiment 3 confirmed the large advantage for short-duration words in the word set originally selected by Baddeley, Thomson, and Buchanan (1975). These findings suggest that there is no reliable advantage for short-duration disyllables in span tasks, and that previous accounts of a word-length effect in disyllables are based on accidental differences between list items. The failure to find an effect of word duration casts doubt on theories that propose that the capacity of memory span is determined by the duration of list items or the decay rate of phonological information in short-term memory.

  17. Dispersion and Frequency: Is There Any Difference as Regards Their Relation to L2 Vocabulary Gains?

    Science.gov (United States)

    Alcaraz-Mármol, Gema

    2015-01-01

    Despite the current importance given to L2 vocabulary acquisition in the last two decades, considerable deficiencies are found in L2 students' vocabulary size. One of the aspects that may influence vocabulary learning is word frequency. However, scholars warn that frequency may lead to wrong conclusions if the way words are distributed is ignored.…

  18. L’ACQUISIZIONE DEI SEGNALI DISCORSIVI IN ITALIANO L2

    Directory of Open Access Journals (Sweden)

    Elisabetta Jafrancesco

    2015-07-01

    The paper presents the results of a study on the acquisition of discourse markers (SD) in Italian L2, conducted on a corpus of spoken language from university students on academic exchange at the University of Florence who were enrolled in Italian L2 courses at the University Language Centre. The main objective of the study is to investigate the Italian of foreigners with respect to this specific feature in order to identify possible acquisitional sequences, thereby contributing to an account of how learners' socio-pragmatic competence develops across the proficiency levels proposed in the Common European Framework of Reference for Languages (CEFR). In particular, the study analyses the use of SD in the dialogic speech of informants at the CEFR Basic, Independent and Proficient levels, with reference to the taxonomic model of SD proposed by Bazzanella in the Grande grammatica italiana di consultazione (1995), supplemented for this specific context with new functions. It highlights the main macro-phenomena that emerged and reflects on how data from the acquisition of Italian L2 can serve as a reference point for defining teaching programmes consistent with the natural processes of competence development.

  19. Learning L2 German vocabulary through reading: the effect of three enhancement techniques compared

    NARCIS (Netherlands)

    Peters, E.; Hulstijn, J.H.; Sercu, L.; Lutjeharms, M.

    2009-01-01

    This study investigated three techniques designed to increase the chances that second language (L2) readers look up and learn unfamiliar words during and after reading an L2 text. Participants in the study, 137 college students in Belgium (L1 = Dutch, L2 = German), were randomly assigned to one of

  20. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  1. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  2. The Impact of Topic Interest, L2 Proficiency, and Gender on EFL Incidental Vocabulary Acquisition through Reading

    Science.gov (United States)

    Lee, Sunjung; Pulido, Diana

    2017-01-01

    This study investigated the impact of topic interest, alongside L2 proficiency and gender, on L2 vocabulary acquisition through reading. A repeated-measures design was used with 135 Korean EFL students. Control variables included topic familiarity, prior target-word knowledge, and target-word difficulty (word length, class, and concreteness).…

  3. Why not model spoken word recognition instead of phoneme monitoring?

    NARCIS (Netherlands)

    Vroomen, J.; de Gelder, B.

    2000-01-01

    Norris, McQueen & Cutler present a detailed account of the decision stage of the phoneme monitoring task. However, we question whether this contributes to our understanding of the speech recognition process itself, and we fail to see why phonotactic knowledge is playing a role in phoneme

  4. Many a true word spoken in jest : Visuele voorstellingspraktyke in ...

    African Journals Online (AJOL)

    Although humour conceals the cultural exclusion in the data set, the cultural codes in the visual material generalise the non-Western 'other' as either extremely religious or as fundamentally different. Key terms: hegemony, Flemish language textbook, Critical Discourse Analysis, focus group discussion, representational ...

  5. L2 listening at work

    DEFF Research Database (Denmark)

    Øhrstrøm, Charlotte

    This dissertation on adult second language (L2) learning investigates individual learners' experiences with listening in Danish as an L2 in everyday situations at work. More specifically, the study explores when international employees, who work at international companies in Denmark with English as a corporate language, listen in Danish at work, how they handle these situations, what problems they experience, and why some situations are more difficult to listen in than others. The study makes use of qualitative research methods and theoretical aspects from psycholinguistic approaches as well as socially…

  6. Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners

    Science.gov (United States)

    Lin, Yen-Liang

    2015-01-01

    This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…

  7. Cohesion as interaction in ELF spoken discourse

    Directory of Open Access Journals (Sweden)

    T. Christiansen

    2013-10-01

    Hitherto, most research into cohesion has concentrated on texts (usually written) only in standard Native Speaker English – e.g. Halliday and Hasan (1976). By contrast, following on the work in anaphora of such scholars as Reinhart (1983) and Cornish (1999), Christiansen (2011) describes cohesion as an interactive process focusing on the link between text cohesion and discourse coherence. Such a consideration of cohesion from the perspective of discourse (i.e. the process of which text is the product – Widdowson 1984, p. 100) is especially relevant within a lingua franca context as the issue of different variations of ELF and inter-cultural concerns (Guido 2008) add extra dimensions to the complex multi-code interaction. In this case study, six extracts of transcripts (approximately 1000 words each), taken from the VOICE corpus (2011) of conference question and answer sessions (spoken interaction) set in multicultural university contexts, are analysed in depth by means of a qualitative method.

  8. Task type and incidental L2 vocabulary learning: Repetition versus ...

    African Journals Online (AJOL)

    This study investigated the effect of task type on incidental L2 vocabulary learning. The different tasks investigated in this study differed in terms of repetition of encounters and task involvement load. In a within-subjects design, 72 Iranian learners of English practised 18 target words in three exercise conditions: three ...

  9. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    Science.gov (United States)

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  10. Does Hearing Several Speakers Reduce Foreign Word Learning?

    Science.gov (United States)

    Ludington, Jason Darryl

    2016-01-01

    Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…

  11. Neural changes underlying early stages of L2 vocabulary acquisition.

    Science.gov (United States)

    Pu, He; Holcomb, Phillip J; Midgley, Katherine J

    2016-11-01

    Research has shown neural changes following second language (L2) acquisition after weeks or months of instruction. But are such changes detectable even earlier than previously shown? The present study examines the electrophysiological changes underlying the earliest stages of second language vocabulary acquisition by recording event-related potentials (ERPs) within the first week of learning. Adult native English speakers with no previous Spanish experience completed less than four hours of Spanish vocabulary training, with pre- and post-training ERPs recorded to a backward translation task. Results indicate that beginning L2 learners show rapid neural changes following learning, manifested in changes to the N400, an ERP component sensitive to lexicosemantic processing and degree of L2 proficiency. Specifically, learners in early stages of L2 acquisition show growth in N400 amplitude to L2 words following learning as well as a backward translation N400 priming effect that was absent pre-training. These results were shown within days of minimal L2 training, suggesting that the neural changes captured during adult second language acquisition are more rapid than previously shown. Such findings are consistent with models of early stages of bilingualism in adult learners of L2 (e.g., Kroll and Stewart's RHM) and reinforce the use of ERP measures to assess L2 learning.
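
    "Growth in N400 amplitude" is typically quantified as the mean voltage in a post-stimulus window (often roughly 300-500 ms), averaged over trials and compared across sessions. The sketch below illustrates that computation on simulated epochs; the window, sampling rate, electrode choice, and paired t-test are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: quantifying an N400 change as the mean amplitude in a
# 300-500 ms post-stimulus window, pre- vs post-training. The epochs are
# simulated; window, sampling rate, electrode choice, and the paired t-test
# are illustrative assumptions, not the study's actual pipeline.
import numpy as np
from scipy.stats import ttest_rel

fs = 250                                    # sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1 / fs)        # epoch from -200 to 800 ms
win = (times >= 0.3) & (times <= 0.5)       # N400 analysis window

rng = np.random.default_rng(2)
n_subjects, n_trials = 20, 60
# Simulated single-trial voltages (in microvolts) at one centro-parietal site;
# the post-training condition is given a slightly more negative mean.
pre = rng.normal(0.0, 5.0, (n_subjects, n_trials, times.size))
post = rng.normal(-1.0, 5.0, (n_subjects, n_trials, times.size))

# Mean amplitude in the window, averaged over trials, per subject.
pre_n400 = pre[:, :, win].mean(axis=(1, 2))
post_n400 = post[:, :, win].mean(axis=(1, 2))

t_val, p_val = ttest_rel(post_n400, pre_n400)
print(f"mean change = {np.mean(post_n400 - pre_n400):.2f} microvolts, "
      f"t = {t_val:.2f}, p = {p_val:.4f}")
```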

  12. Is there pain in champagne? Semantic involvement of words within words during sense-making

    NARCIS (Netherlands)

    van Alphen, P.M.; van Berkum, J.J.A.

    2010-01-01

    In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch

  13. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    Science.gov (United States)

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  14. L2 Vocabulary Acquisition in Children: Effects of Learning Method and Cognate Status

    Science.gov (United States)

    Tonzar, Claudio; Lotto, Lorella; Job, Remo

    2009-01-01

    In this study we investigated the effects of two learning methods (picture- or word-mediated learning) and of word status (cognates vs. noncognates) on the vocabulary acquisition of two foreign languages: English and German. We examined children from fourth and eighth grades in a school setting. After a learning phase during which L2 words were…

  15. Voice congruency facilitates word recognition.

    Directory of Open Access Journals (Sweden)

    Sandra Campeanu

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  16. Voice congruency facilitates word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  17. The Mother Tongue in the Foreign Language: An Account of Russian L2 Learners' Error Incidence on Output

    Science.gov (United States)

    Forteza Fernandez, Rafael Filiberto; Korneeva, Larisa I.

    2017-01-01

    Based on Selinker's hypothesis of five psycholinguistic processes shaping interlanguage (1972), the paper focuses attention on the Russian L2-learners' overreliance on the L1 as the main factor hindering their development. The research problem is, therefore, the high incidence of L1 transfer in the spoken and written English language output of…

  18. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Social interaction facilitates word learning in preverbal infants: Word-object mapping and word segmentation.

    Science.gov (United States)

    Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo

    2017-08-01

    In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Hearing taboo words can result in early talker effects in word recognition for female listeners.

    Science.gov (United States)

    Tuft, Samantha E; MᶜLennan, Conor T; Krestar, Maura L

    2018-02-01

    Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants who heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.

  1. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    Science.gov (United States)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence, produced by a previous step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences across some regions of Brazil are considered, but only those that cause differences in phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from the orthographic and grammatical points of view, to eliminate the incorrect ones.
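
    The final stage described above, mapping a phonetic sequence onto a ranked list of candidate spellings, can be pictured as combining per-phoneme grapheme alternatives into scored word hypotheses and filtering them against a lexicon. The toy sketch below illustrates only that idea; the phoneme inventory, grapheme probabilities, and mini-lexicon are invented placeholders, not the PLB system's actual knowledge sources.

```python
# Hedged sketch: expanding a phoneme sequence into ranked spelling candidates,
# in the spirit of a phonologic-graphemic transcoder. The phoneme inventory,
# grapheme alternatives, probabilities, and mini-lexicon are toy placeholders,
# not the actual PLB system's knowledge sources.
from itertools import product

# Each phoneme maps to (grapheme, probability) alternatives.
P2G = {
    "s": [("s", 0.6), ("ss", 0.2), ("ç", 0.2)],
    "a": [("a", 1.0)],
    "k": [("c", 0.7), ("qu", 0.2), ("k", 0.1)],
    "o": [("o", 1.0)],
}

def spell_candidates(phonemes, lexicon=None):
    """Return candidate spellings for a phoneme sequence, most probable first."""
    candidates = []
    for combo in product(*(P2G[p] for p in phonemes)):
        word = "".join(grapheme for grapheme, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        candidates.append((word, prob))
    candidates.sort(key=lambda wp: wp[1], reverse=True)
    if lexicon is not None:            # final filtering against attested words
        candidates = [wp for wp in candidates if wp[0] in lexicon]
    return candidates

# Hypothetical input /s a k o/ filtered against a one-word toy lexicon.
print(spell_candidates(["s", "a", "k", "o"], lexicon={"saco"}))
```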

  2. Some words on Word

    NARCIS (Netherlands)

    Janssen, Maarten; Visser, A.

    In many disciplines, the notion of a word is of central importance. For instance, morphology studies le mot comme tel, pris isolément ("the word as such, taken in isolation"; Mel'čuk, 1993 [74]). In the philosophy of language the word was often considered to be the primary bearer of meaning. Lexicography has as its fundamental role

  3. A Descriptive Study of Registers Found in Spoken and Written Communication (A Semantic Analysis

    Directory of Open Access Journals (Sweden)

    Nurul Hidayah

    2016-07-01

    This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research. The data of the study are registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and on the internet. The data can take the form of words, phrases and abbreviations. For data collection, the writer uses the library method as her instrument and relates it to the study of register in spoken and written communication. The data are analysed using a descriptive method. The registers are divided into formal and informal registers, and the meaning of each register is identified.

  4. Locus of Word Frequency Effects in Spelling to Dictation: Still at the Orthographic Level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-01-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…

  5. Phonological and Semantic Knowledge Are Causal Influences on Learning to Read Words in Chinese

    Science.gov (United States)

    Zhou, Lulin; Duff, Fiona J.; Hulme, Charles

    2015-01-01

    We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…

  6. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    Science.gov (United States)

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  7. Words Get in the Way: Linguistic Effects on Talker Discrimination.

    Science.gov (United States)

    Narayan, Chandan R; Mak, Lorinda; Bialystok, Ellen

    2017-07-01

    A speech perception experiment provides evidence that the linguistic relationship between words affects the discrimination of their talkers. Listeners discriminated two talkers' voices with various linguistic relationships between their spoken words. Listeners were asked whether two words were spoken by the same person or not. Word pairs varied with respect to the linguistic relationship between the component words, forming either: phonological rhymes, lexical compounds, reversed compounds, or unrelated pairs. The degree of linguistic relationship between the words affected talker discrimination in a graded fashion, revealing biases listeners have regarding the nature of words and the talkers that speak them. These results indicate that listeners expect a talker's words to be linguistically related, and more generally, indexical processing is affected by linguistic information in a top-down fashion even when listeners are not told to attend to it. Copyright © 2016 Cognitive Science Society, Inc.

  8. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significant higher scores for total language on the CELF-P and expressive vocabulary on the EOWPVT, but not for articulation nor receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently

  9. Basic speech recognition for spoken dialogues

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-09-01

    Spoken dialogue systems (SDSs) have great potential for information access in the developing world. However, the realisation of that potential requires the solution of several challenging problems, including the development of sufficiently accurate...

  10. Iconic Factors and Language Word Order

    Science.gov (United States)

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  11. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  12. L2 Spelling Errors in Italian Children with Dyslexia.

    Science.gov (United States)

    Palladino, Paola; Cismondo, Dhebora; Ferrari, Marcella; Ballagamba, Isabella; Cornoldi, Cesare

    2016-05-01

    The present study aimed to investigate L2 spelling skills in Italian children by administering an English word dictation task to 13 children with dyslexia (CD), 13 control children (comparable in age, gender, schooling and IQ) and a group of 10 children with an English learning difficulty but no L1 learning disorder. Patterns of difficulties were examined for accuracy and type of errors in spelling dictated short and long words (i.e. disyllables and three-syllable words). Notably, CD were poor in spelling English words. Furthermore, their errors were mainly related to the phonological representation of words, as they made more 'phonologically' implausible errors than controls. In addition, CD errors were more frequent for short than long words. Conversely, the three groups did not differ in the number of plausible ('non-phonological') errors, that is, words that were incorrectly written but whose reading could correspond to the dictated word via either Italian or English rules. Error analysis also showed syllable-position differences in the spelling patterns of CD, children with an English learning difficulty and control children. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Grapho-Morphological Awareness in Spanish L2 Reading: How Do Learners Use This Metalinguistic Skill?

    Science.gov (United States)

    Miguel, Nausica Marcos

    2012-01-01

    This paper contributes to the literature on the transferability of grapho-morphological awareness (GMA) for second language (L2) learners by analysing L2 learners' knowledge of morphology in reading. GMA helps readers to identify grammatical categories, infer meanings of unfamiliar words, and access stored lexical information. Previous research…

  14. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  15. Voice reinstatement modulates neural indices of continuous word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Backer, Kristina C; Alain, Claude

    2014-09-01

    The present study was designed to examine listeners' ability to use voice information incidentally during spoken word recognition. We recorded event-related brain potentials (ERPs) during a continuous recognition paradigm in which participants indicated on each trial whether the spoken word was "new" or "old." Old items were presented at 2, 8 or 16 words following the first presentation. Context congruency was manipulated by having the same word repeated by either the same speaker or a different speaker. The different speaker could share the gender, accent or neither feature with the word presented the first time. Participants' accuracy was greater when the old word was spoken by the same speaker than by a different speaker. In addition, accuracy decreased with increasing lag. The correct identification of old words was accompanied by an enhanced late positivity over parietal sites, with no difference found between voice congruency conditions. In contrast, an earlier voice reinstatement effect was observed over frontal sites, an index of priming that preceded recollection in this task. Our results provide further evidence that acoustic and semantic information are integrated into a unified trace and that acoustic information facilitates spoken word recollection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Variation Patterns in Across-Word Regressive Assimilation in Picard: An Optimality Theoretic Account.

    Science.gov (United States)

    Cardoso, Walcir

    2001-01-01

    Offers an optimality theoretic account for the phonological process of across-word regressive assimilation (AWRA) in Picard, a Gallo-Romance dialect spoken in the Picardie region in Northern France and Southern Belgium. Focuses on the varieties spoken in the Vimeu region of France. Examines one particular topic in the analysis of AWRA: the…

  17. Novel word retention in sequential bilingual children.

    Science.gov (United States)

    Kan, Pui Fong

    2014-03-01

    Children's ability to learn and retain new words is fundamental to their vocabulary development. This study examined word retention in children learning a home language (L1) from birth and a second language (L2) in preschool settings. Participants were presented with sixteen novel words in L1 and in L2 and were tested for retention after either a 2-month or a 4-month delay. Results showed that children retained more words in L1 than in L2 for both of the retention interval conditions. In addition, children's word retention was associated with their existing language knowledge and their fast-mapping performance within and across language. The patterns of association, however, were different between L1 and L2. These findings suggest that children's word retention might be related to the interactions of various components that are operating within a dynamic system.

  18. L2 Selves, Emotions, and Motivated Behaviors

    Science.gov (United States)

    Teimouri, Yasser

    2017-01-01

    This study has aimed to investigate language learners' emotional experiences through the lens of L2 future self-guides. To that end, the L2 motivational self system was chosen as the theoretical framework to relate learners' emotions to their L2 selves. However, due to inconsistent results of past research concerning the motivational role of the…

  19. Novel second language words and asymmetric lexical access

    NARCIS (Netherlands)

    Escudero, P.; Hayes-Harb, R.; Mitterer, H.

    2008-01-01

    The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English

  20. Signal Words

    Science.gov (United States)

    SIGNAL WORDS TOPIC FACT SHEET NPIC fact sheets are designed to answer questions that are commonly asked by the ... making decisions about pesticide use. What are Signal Words? Signal words are found on pesticide product labels, ...

  1. Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults

    Science.gov (United States)

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…

  2. Proficiency and sentence constraint effects on second language word learning.

    Science.gov (United States)

    Ma, Tengfei; Chen, Baoguo; Lu, Chunming; Dunlap, Susan

    2015-07-01

    This paper presents an experiment that investigated the effects of L2 proficiency and sentence constraint on semantic processing of unknown L2 words (pseudowords). All participants were Chinese native speakers who learned English as a second language. In the experiment, we used a whole sentence presentation paradigm with a delayed semantic relatedness judgment task. Both higher and lower-proficiency L2 learners could make use of the high-constraint sentence context to judge the meaning of novel pseudowords, and higher-proficiency L2 learners outperformed lower-proficiency L2 learners in all conditions. These results demonstrate that both L2 proficiency and sentence constraint affect subsequent word learning among second language learners. We extended L2 word learning into a sentence context, replicated the sentence constraint effects previously found among native speakers, and found proficiency effects in L2 word learning. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    Science.gov (United States)

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  4. Measurements of Development in L2 Written Production: The Case of L2 Chinese

    Science.gov (United States)

    Jiang, Wenying

    2013-01-01

    This study investigates measures for second language (L2) writing development. A T-unit, which has been found the most satisfactory unit of analysis for measuring L2 development in English, is extended to measure L2 Chinese writing development through a cross-sectional design in this study. Data were collected from three L2 Chinese learner groups…

  5. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families have been known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…

  6. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  7. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    window focused over the part which most likely contains an answer to the query. The two systems are integrated into a full spoken query answering system. The prototype can answer queries and questions within the chosen football (soccer) test domain, but the system has the flexibility for being ported...

  8. SPOKEN AYACUCHO QUECHUA, UNITS 11-20.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THE ESSENTIALS OF AYACUCHO GRAMMAR WERE PRESENTED IN THE FIRST VOLUME OF THIS SERIES, SPOKEN AYACUCHO QUECHUA, UNITS 1-10. THE 10 UNITS IN THIS VOLUME (11-20) ARE INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE, AND PRESENT THE STUDENT WITH LENGTHIER AND MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS AS WELL…

  9. SPOKEN CUZCO QUECHUA, UNITS 7-12.

    Science.gov (United States)

    SOLA, DONALD F.; AND OTHERS

    THIS SECOND VOLUME OF AN INTRODUCTORY COURSE IN SPOKEN CUZCO QUECHUA ALSO COMPRISES ENOUGH MATERIAL FOR ONE INTENSIVE SUMMER SESSION COURSE OR ONE SEMESTER OF SEMI-INTENSIVE INSTRUCTION (120 CLASS HOURS). THE METHOD OF PRESENTATION IS ESSENTIALLY THE SAME AS IN THE FIRST VOLUME WITH FURTHER CONTRASTIVE, LINGUISTIC ANALYSIS OF ENGLISH-QUECHUA…

  10. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    Science.gov (United States)

    LASTRA, YOLANDA; SOLA, DONALD F.

    UNITS 13-24 OF THE SPOKEN COCHABAMBA QUECHUA COURSE FOLLOW THE GENERAL FORMAT OF THE FIRST VOLUME (UNITS 1-12). THIS SECOND VOLUME IS INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE AND INCLUDES MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS, AS WELL AS GRAMMAR AND EXERCISE SECTIONS COVERING ADDITIONAL…

  11. SPOKEN AYACUCHO QUECHUA. UNITS 1-10.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THIS BEGINNING COURSE IN AYACUCHO QUECHUA, SPOKEN BY ABOUT A MILLION PEOPLE IN SOUTH-CENTRAL PERU, WAS PREPARED TO INTRODUCE THE PHONOLOGY AND GRAMMAR OF THIS DIALECT TO SPEAKERS OF ENGLISH. THE FIRST OF TWO VOLUMES, IT SERVES AS A TEXT FOR A 6-WEEK INTENSIVE COURSE OF 20 CLASS HOURS A WEEK. THE AUTHORS COMPARE AND CONTRAST SIGNIFICANT FEATURES OF…

  12. A Grammar of Spoken Brazilian Portuguese.

    Science.gov (United States)

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  13. Towards Affordable Disclosure of Spoken Heritage Archives

    NARCIS (Netherlands)

    Larson, M; Ordelman, Roeland J.F.; Heeren, W.F.L.; Fernie, K; de Jong, Franciska M.G.; Huijbregts, M.A.H.; Oomen, J; Hiemstra, Djoerd

    2009-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to

  14. Mapping Students' Spoken Conceptions of Equality

    Science.gov (United States)

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students was asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing number problem. Students' responses were conceptualised as acts of communication and…

  15. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  16. Business Spoken English Learning Strategies for Chinese Enterprise Staff

    Institute of Scientific and Technical Information of China (English)

    Han Li

    2013-01-01

    This study addresses the issue of promoting effective Business Spoken English among enterprise staff in China. It aims to assess spoken English learning methods, identify the difficulties of learning oral English expression in business contexts, and provide strategies for enhancing enterprise staff's level of Business Spoken English.

  17. The embodiment of emotional words in a second language: An eye-movement study.

    Science.gov (United States)

    Sheikh, Naveed A; Titone, Debra

    2016-01-01

    The hypothesis that word representations are emotionally impoverished in a second language (L2) has variable support. However, this hypothesis has only been tested using tasks that present words in isolation or that require laboratory-specific decisions. Here, we recorded eye movements for 34 bilinguals who read sentences in their L2 with no goal other than comprehension, and compared them to 43 first language readers taken from our prior study. Positive words were read more quickly than neutral words in the L2 across first-pass reading time measures. However, this emotional advantage was absent for negative words for the earliest measures. Moreover, negative words but not positive words were influenced by concreteness, frequency and L2 proficiency in a manner similar to neutral words. Taken together, the findings suggest that only negative words are at risk of emotional disembodiment during L2 reading, perhaps because a positivity bias in L2 experiences ensures that positive words are emotionally grounded.

  18. Theology of Jesus’ words from the cross

    Directory of Open Access Journals (Sweden)

    Bogdan Zbroja

    2012-09-01

    The article presents the theological message of the last words that Jesus spoke from the height of the cross. The content is arranged around three kinds of Christ's relations: the words addressed to God the Father; the words addressed to the good people standing by the cross; and the so-called declarations that the Master did not address to anyone in particular but uttered in general. All these words speak of the Master's love. They express His full awareness of what is being done and of the decision He has voluntarily taken. Above all, the Lord's statements reveal His obedience to the will of God expressed in the inspired words of the Holy Scriptures. Jesus fulfills all the prophecies of the Old Testament through the words He pronounced and the works He accomplished, which will become the content of the New Testament.

  19. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    Science.gov (United States)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  20. Tracking the time course of word-frequency effects in auditory word recognition with event-related potentials.

    Science.gov (United States)

    Dufour, Sophie; Brunellière, Angèle; Frauenfelder, Ulrich H

    2013-04-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word frequency effect that is limited to a purely decisional locus after word identification has been completed. Copyright © 2012 Cognitive Science Society, Inc.

  1. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  2. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who were taking the English Entrant subject (TOEFL iBT). Finally, the writer is of the opinion that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though it does not cause misunderstanding at the moment, this may become problematic if they have to communicate in the real world.

  3. Modality differences between written and spoken story retelling in healthy older adults

    Directory of Open Access Journals (Sweden)

    Jessica Ann Obermeyer

    2015-04-01

    Methods: Ten native English speaking healthy elderly participants between the ages of 50 and 80 were recruited. Exclusionary criteria included neurological disease/injury, history of learning disability, uncorrected hearing or vision impairment, history of drug/alcohol abuse and presence of cognitive decline (based on the Cognitive Linguistic Quick Test). Spoken and written discourse was analyzed for micro linguistic measures including total words, percent correct information units (CIUs; Nicholas & Brookshire, 1993) and percent complete utterances (CUs; Edmonds et al., 2009). CIUs measure relevant and informative words while CUs focus at the sentence level and measure whether a relevant subject, verb and object (if appropriate) are present. Results: Analysis was completed using the Wilcoxon Rank Sum Test due to small sample size. Preliminary results revealed that healthy elderly people produced significantly more words in spoken retellings than written retellings (p = .000); however, this measure contrasted with %CIUs and %CUs, with participants producing significantly higher %CIUs (p = .000) and %CUs (p = .000) in written story retellings than in spoken story retellings. Conclusion: These findings indicate that written retellings, while shorter, contained higher accuracy at both a word (CIU) and sentence (CU) level. This observation could be related to the ability to revise written text and therefore make it more concise, whereas the nature of speech results in more embellishment and "thinking out loud," such as comments about the task, associated observations about the story, etc. We plan to run more participants and conduct a main concepts analysis (before conference time) to gain more insight into modality differences and implications.
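
    The nonparametric comparison described above can be illustrated with a short sketch. The %CIU computation and the scipy-based rank-sum test below are hedged illustrations with invented word counts; they are not the study's data or analysis code.

```python
# Illustrative sketch: percent Correct Information Units (%CIU) per retelling and a
# Wilcoxon rank-sum comparison of spoken vs. written scores, as named in the abstract.
# All counts below are invented for illustration; they are not the study's data.
from scipy.stats import ranksums

def percent_ciu(total_words: int, cius: int) -> float:
    """%CIU = relevant and informative words divided by total words, as a percentage."""
    return 100.0 * cius / total_words

# One (fabricated) score per participant and modality.
spoken_pciu  = [percent_ciu(w, c) for w, c in [(210, 120), (185, 101), (240, 130)]]
written_pciu = [percent_ciu(w, c) for w, c in [(140, 105), (120, 92), (160, 122)]]

stat, p = ranksums(written_pciu, spoken_pciu)  # nonparametric test suited to small samples
print([round(x, 1) for x in spoken_pciu], [round(x, 1) for x in written_pciu])
print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")
```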

  4. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence. Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state of the art in spoken dialog systems with applications in robotics, knowledge access and communication. They specifically address: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and 5. Dialog quality evaluation. The articles were presented at the IWSDS 2012 workshop.

  5. The L2 Motivational Self System and L2 Achievement: A Study of Saudi EFL Learners

    Science.gov (United States)

    Moskovsky, Christo; Assulaimani, Turki; Racheva, Silvia; Harkins, Jean

    2016-01-01

    The research reported in this article explores the relationship between Dörnyei's (2005, 2009) Second Language Motivational Self System (L2MSS) and the L2 proficiency level of Saudi learners of English as a foreign language (EFL). Male and female participants (N = 360) responded to a questionnaire relating to the main components of L2MSS, the…

  6. Activation of words with phonological overlap

    Directory of Open Access Journals (Sweden)

    Claudia K. Friedrich

    2013-08-01

    Multiple lexical representations overlapping with the input (cohort neighbors) are temporarily activated in the listener’s mental lexicon when speech unfolds in time. Activation for cohort neighbors appears to rapidly decline as soon as there is mismatch with the input. However, it is a matter of debate whether or not they are completely excluded from further processing. We recorded behavioral data and event-related brain potentials (ERPs) in auditory-visual word onset priming during a lexical decision task. As primes we used the first two syllables of spoken German words. In a carrier word condition, the primes were extracted from spoken versions of the target words (ano-ANORAK 'anorak'). In a cohort neighbor condition, the primes were taken from words that overlap with the target word up to the second nucleus (ana- taken from ANANAS 'pineapple'). Relative to a control condition, where primes and targets were unrelated, lexical decision responses for cohort neighbors were delayed. This reveals that cohort neighbors are disfavored by the decision processes at the behavioral front end. In contrast, left-anterior ERPs reflected long-lasting facilitated processing of cohort neighbors. We interpret these results as evidence for extended parallel processing of cohort neighbors. That is, in parallel to the preparation and elicitation of delayed lexical decision responses to cohort neighbors, aspects of the processing system appear to keep track of those less efficient candidates.

  7. A Positivity Bias in Written and Spoken English and Its Moderation by Personality and Gender.

    Science.gov (United States)

    Augustine, Adam A; Mehl, Matthias R; Larsen, Randy J

    2011-09-01

    The human tendency to use positive words ("adorable") more often than negative words ("dreadful") is called the linguistic positivity bias. We find evidence for this bias in two studies of word use, one based on written corpora and another based on naturalistic speech samples. In addition, we demonstrate that the positivity bias applies to nouns and verbs as well as adjectives. We also show that it is found to the same degree in written as well as spoken English. Moreover, personality traits and gender moderate the effect, such that persons high on extraversion and agreeableness and women display a larger positivity bias in naturalistic speech. Results are discussed in terms of how the linguistic positivity bias may serve as a mechanism for social facilitation. People, in general, and some people more than others, tend to talk about the brighter side of life.
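
    The positivity bias described above can be expressed as a simple ratio of positive to negative tokens in a sample. The following sketch uses invented mini word lists and an invented example sentence purely for illustration; the published studies rely on large corpora, naturalistic speech samples and validated affective word norms.

```python
# Toy positivity-bias measure: ratio of positive to negative tokens in a text sample.
# The valence lists and the example sentence are invented for illustration only.
POSITIVE = {"adorable", "happy", "great", "wonderful", "love"}
NEGATIVE = {"dreadful", "sad", "terrible", "awful", "hate"}

def positivity_ratio(text: str) -> float:
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return pos / neg if neg else float("inf")

sample = "the trip was wonderful and the food was great though the weather was dreadful"
print(positivity_ratio(sample))  # 2.0 -> positive tokens twice as frequent as negative
```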

  8. Transcription and Annotation of a Japanese Accented Spoken Corpus of L2 Spanish for the Development of CAPT Applications

    Science.gov (United States)

    Carranza, Mario

    2016-01-01

    This paper addresses the process of transcribing and annotating spontaneous non-native speech with the aim of compiling a training corpus for the development of Computer Assisted Pronunciation Training (CAPT) applications, enhanced with Automatic Speech Recognition (ASR) technology. To better adapt ASR technology to CAPT tools, the recognition…

  9. Informational Packaging, Level of Formality, and the Use of Circumstance Adverbials in L1 and L2 Student Academic Presentations

    Science.gov (United States)

    Zareva, Alla

    2009-01-01

    The analysis of circumstance adverbials in this paper was based on L1 and L2 corpora of student presentations, each consisting of approximately 30,000 words. The overall goal of the investigation was to identify specific functions L1 and L2 college students attributed to circumstance adverbials (the most frequently used adverbial class in…

  10. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian Herff

    2015-06-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
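
    The word and phone error rates quoted above are standard ASR-style scores: the edit distance between a reference and a decoded sequence, normalised by the reference length. A minimal, hedged sketch (not the authors' evaluation code) follows; the example sentences are invented.

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over word sequences,
# normalised by reference length. Example strings are invented for illustration.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance counting substitutions, insertions, deletions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the brain decodes spoken words", "the brain decoded spoken words"))  # 0.2
```

    The same edit-distance computation applied to phone sequences instead of word sequences gives the phone error rate.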

  11. Lexical availability of young Spanish EFL learners: emotion words versus non-emotion words

    OpenAIRE

    Jiménez Catalán, R.; Dewaele, Jean-Marc

    2017-01-01

    This study intends to contribute to L2 emotion vocabulary research by looking at the words that primary school EFL learners produce in response to prompts in a lexical availability task. Specifically, it aims to ascertain whether emotion prompts (Love, Hate, Happy and Sad) generate a greater number of words than non-emotion prompts (School and Animals). It also seeks to identify the words learners associate with each semantic category to determine whether the words produced in response to emo...

  12. Word order and information structure in Makhuwa-Enahara

    NARCIS (Netherlands)

    Wal, Guenever Johanna van der

    2009-01-01

    This thesis investigates the grammar of Makhuwa-Enahara, a Bantu language spoken in the north of Mozambique. The information structure is an influential factor in this language, determining the word order and the use of special conjugations known as conjoint and disjoint verb forms. The thesis

  13. Word order variation and foregrounding of complement clauses

    DEFF Research Database (Denmark)

    Christensen, Tanya Karoli; Jensen, Torben Juel

    2015-01-01

    Through mixed models analyses of complement clauses in a corpus of spoken Danish we examine the role of sentence adverbials in relation to a word order distinction in Scandinavian signalled by the relative position of sentence adverbials and finite verb (V>Adv vs. Adv>V). The type of sentence...

  14. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

    This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held by industry, researchers and analysts. Several studies in the literature of SDS indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home...

  15. Suggestions of keeping L2 motivation

    Institute of Scientific and Technical Information of China (English)

    徐斌

    2014-01-01

    How to sustain motivation throughout second language learning remains a puzzle, even though what L2 motivation is and how it develops have been discussed by psychologists, educators and others. The aim of this paper is to clarify the basic content of motivation, including its definition, classification and importance; the current state of English learning motivation among senior high school students; and what should be done to sustain that motivation. The paper consists of an introduction, a chapter stating the content and classification of L2 motivation, a chapter analyzing the necessity and state of L2 motivation in senior high school, a chapter offering approaches to sustaining that motivation, and a conclusion, drawing on comparison, explanation and citations.

  16. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6 to 11-year-old deaf children who use spoken English (N=59) with hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  17. L2 writing and L2 written feedback in upper secondary schools as experienced by teachers

    OpenAIRE

    Manousou, Angeliki

    2015-01-01

    L2 written feedback is a multi-faceted issue, which is the reason behind the large number of studies that have been conducted on it. However, the majority of studies deal with learners' opinions of teachers' feedback, or with several types of feedback and their advantages and disadvantages. No studies have addressed teachers' opinions of their own L2 written feedback. This study attempts to describe how L2 teachers view their written feedback on learners' essays. ...

  18. Speech Perception Engages a General Timer: Evidence from a Divided Attention Word Identification Task

    Science.gov (United States)

    Casini, Laurence; Burle, Boris; Nguyen, Noel

    2009-01-01

    Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…

  19. The locus of word frequency effects in skilled spelling-to-dictation.

    Science.gov (United States)

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  20. The visual-auditory color-word Stroop asymmetry and its time course

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2005-01-01

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect.

  1. Repeated imitation makes human vocalizations more word-like.

    Science.gov (United States)

    Edmiston, Pierce; Perlman, Marcus; Lupyan, Gary

    2018-03-14

    People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words. © 2018 The Author(s).

  2. The Effect of Online Translators on L2 Writing in French

    Science.gov (United States)

    O'Neill, Errol Marinus

    2012-01-01

    Online translation (OT) sites such as Free Translation and Babel Fish are freely available to the general public and purport to convert inputted text, from single words to entire paragraphs, instantly from one language to another. Discussion in the literature about OT for second-language (L2) acquisition has generally focused either on the…

  3. Incidental L2 Vocabulary Acquisition "from" and "while" Reading: An Eye-Tracking Study

    Science.gov (United States)

    Pellicer-Sánchez, Ana

    2016-01-01

    Previous studies have shown that reading is an important source of incidental second language (L2) vocabulary acquisition. However, we still do not have a clear picture of what happens when readers encounter unknown words. Combining offline (vocabulary tests) and online (eye-tracking) measures, the incidental acquisition of vocabulary knowledge…

  4. Word form Encoding in Chinese Word Naming and Word Typing

    Science.gov (United States)

    Chen, Jenn-Yeu; Li, Cheng-Yi

    2011-01-01

    The process of word form encoding was investigated in primed word naming and word typing with Chinese monosyllabic words. The target words shared or did not share the onset consonants with the prime words. The stimulus onset asynchrony (SOA) was 100 ms or 300 ms. Typing required the participants to enter the phonetic letters of the target word,…

  5. Does segmental overlap help or hurt? Evidence from blocked cyclic naming in spoken and written production.

    Science.gov (United States)

    Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda

    2016-04-01

    Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.

  6. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    Science.gov (United States)

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  7. L2 Chinese: Grammatical Development and Processing

    Science.gov (United States)

    Mai, Ziyin

    2016-01-01

    Two recent books (Jiang, 2014, "Advances in Chinese as a second language"; Wang, 2013, "Grammatical development of Chinese among non-native speakers") provide new resources for exploring the role of processing in acquiring Chinese as a second language (L2). This review article summarizes, assesses and compares some of the…

  8. Investigating L2 Performance in Text Chat

    Science.gov (United States)

    Sauro, Shannon; Smith, Bryan

    2010-01-01

    This study examines the linguistic complexity and lexical diversity of both overt and covert L2 output produced during synchronous written computer-mediated communication, also referred to as chat. Video enhanced chatscripts produced by university learners of German (N = 23) engaged in dyadic task-based chat interaction were coded and analyzed for…

  9. A Brief Analysis of L2 Motivation

    Institute of Scientific and Technical Information of China (English)

    武曼

    2017-01-01

    Learning motivation is the driving force that urges students to engage in learning activities. This thesis introduces types of L2 learning motivation. Through the contrast and analysis of positive and negative motivation in practical applications, the paper indicates the unique value of positive motivation and the defects of negative motivation, and makes reasonable suggestions.

  10. MD2725: 16L2 aperture measurement

    CERN Document Server

    Mirarchi, Daniele; Rossi, Roberto; CERN. Geneva. ATS Department

    2018-01-01

    Dumps induced by a sudden increase of losses in the half-cell 16L2 have been a serious machine limitation during the 2017 run. The aim of this MD was to perform local aperture measurements in order to assess differences after the beam screen regeneration, compared to the first measurements in 2017.

  11. Measuring Syntactic Complexity in Spontaneous Spoken Swedish

    Science.gov (United States)

    Roll, Mikael; Frid, Johan; Horne, Merle

    2007-01-01

    Hesitation disfluencies after phonetically prominent stranded function words are thought to reflect the cognitive coding of complex structures. Speech fragments following the Swedish function word "att" "that" were analyzed syntactically, and divided into two groups: one with "att" in disfluent contexts, and the other with "att" in fluent…

  12. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  13. Gaming as Extramural English L2 Learning and L2 Proficiency among Young Learners

    Science.gov (United States)

    Sylven, Liss Kerstin; Sundqvist, Pia

    2012-01-01

    Today, playing digital games is an important part of many young people's everyday lives. Claims have been made that certain games, in particular massively multiplayer online role-playing games (MMORPGs) provide L2 English learners with a linguistically rich and cognitively challenging virtual environment that may be conducive to L2 learning, as…

  14. The Impact of Resilience on L2 Learners' Motivated Behaviour and Proficiency in L2 Learning

    Science.gov (United States)

    Kim, Tae-Young; Kim, Yoon-Kyoung

    2017-01-01

    This exploratory study focused on the factors that constitute second language (L2) learners' resilience, and how these factors are related to L2 learning by investigating what relation resilience may have to motivated behaviour and proficiency in English learning. A total of 1620 secondary school learners of English participated in a questionnaire…

  15. L2-L1 Translation Priming Effects in a Lexical Decision Task: Evidence From Low Proficient Korean-English Bilinguals

    Directory of Open Access Journals (Sweden)

    Yoonhyoung Lee

    2018-03-01

    One of the key issues in bilingual lexical representation is whether L1 processing is facilitated by L2 words. In this study, we conducted two experiments using the masked priming paradigm to examine how L2-L1 translation priming effects emerge when unbalanced, low proficiency, Korean-English bilinguals performed a lexical decision task. In Experiment 1, we used a 150 ms SOA (50 ms prime duration followed by a blank interval of 100 ms) and found a significant L2-L1 translation priming effect. In contrast, in Experiment 2, we used a 60 ms SOA (50 ms prime duration followed by a blank interval of 10 ms) and found a null effect of L2-L1 translation priming. This finding is the first demonstration of a significant L2-L1 translation priming effect with unbalanced Korean-English bilinguals. Implications of this work are discussed with regard to bilingual word recognition models.

  16. Acquisition of L2 Japanese Geminates: Training with Waveform Displays

    Directory of Open Access Journals (Sweden)

    Miki Motohashi-Saigo

    2009-06-01

    The value of waveform displays as visual feedback was explored in a training study involving perception and production of L2 Japanese by beginning-level L1 English learners. A pretest-posttest design compared auditory-visual (AV) and auditory-only (A-only) Web-based training. Stimuli were singleton and geminate /t,k,s/ followed by /a,u/ in two conditions (isolated words, carrier sentences). Fillers with long vowels were included. Participants completed a forced-choice identification task involving minimal triplets: singletons, geminates, long vowels (e.g., sasu, sassu, saasu). Results revealed (a) a significant improvement in geminate identification following training, especially for AV; (b) a significant effect of geminate (lowest scores for /s/); (c) no significant effect of condition; and (d) no significant improvement for the control group. Most errors were misperceptions of geminates as long vowels. Test of generalization revealed 5% decline in accuracy for AV and 14% for A-only. Geminate production improved significantly (especially for AV) based on rater judgments; improvement was greatest for /k/ and smallest for /s/. Most production errors involved substitution of a singleton for a geminate. Post-study interviews produced positive comments on Web-based training. Waveforms increased awareness of durational differences. Results support the effectiveness of auditory-visual input in L2 perception training with transfer to novel stimuli and improved production.

  17. Chinese Learners of English See Chinese Words When Reading English Words.

    Science.gov (United States)

    Ma, Fengyang; Ai, Haiyang

    2018-06-01

    The present study examines whether, when second language (L2) learners read words in the L2, the orthography and/or phonology of the translation words in the first language (L1) is activated, and whether the patterns would be modulated by proficiency in the L2. In two experiments, two groups of Chinese learners of English immersed in the L1 environment, one less proficient and the other more proficient in English, performed a translation recognition task. In this task, participants judged whether pairs of words, with an L2 word preceding an L1 word, were translation words or not. The critical conditions compared the performance of learners in rejecting distractors that were related to the L1 translation word (pronounced as /bei 1/) of an L2 word (e.g., cup) in orthography (e.g., the word meaning bad in Chinese, pronounced as /huai 4/) or phonology (e.g., the word meaning sad in Chinese, pronounced as /bei 1/). Results of Experiment 1 showed less proficient learners were slower and less accurate to reject translation orthography distractors, as compared to unrelated controls, demonstrating a robust translation orthography interference effect. In contrast, their performance was not significantly different when rejecting translation phonology distractors, relative to unrelated controls, showing no translation phonology interference. The same patterns were observed in more proficient learners in Experiment 2. Together, these results suggest that when Chinese learners of English read English words, the orthographic information, but not the phonological information of the Chinese translation words is activated. In addition, this activation is not modulated by L2 proficiency.

  18. Development and implementation of a novel assay for L-2-hydroxyglutarate dehydrogenase (L-2-HGDH) in cell lysates: L-2-HGDH deficiency in 15 patients with L-2-hydroxyglutaric aciduria

    DEFF Research Database (Denmark)

    Kranendijk, M; Salomons, G S; Gibson, K M

    2009-01-01

    L-2-hydroxyglutaric aciduria (L-2-HGA) is a rare inherited autosomal recessive neurometabolic disorder caused by mutations in the gene encoding L-2-hydroxyglutarate dehydrogenase. An assay to evaluate L-2-hydroxyglutarate dehydrogenase (L-2-HGDH) activity in fibroblast, lymphoblast and/or lymphoc...

  19. Word classes

    DEFF Research Database (Denmark)

    Rijkhoff, Jan

    2007-01-01

    in grammatical descriptions of some 50 languages, which together constitute a representative sample of the world’s languages (Hengeveld et al. 2004: 529). It appears that there are both quantitative and qualitative differences between word class systems of individual languages. Whereas some languages employ a parts-of-speech system that includes the categories Verb, Noun, Adjective and Adverb, other languages may use only a subset of these four lexical categories. Furthermore, quite a few languages have a major word class whose members cannot be classified in terms of the categories Verb – Noun – Adjective – Adverb, because they have properties that are strongly associated with at least two of these four traditional word classes (e.g. Adjective and Adverb). Finally, this article discusses some of the ways in which word class distinctions interact with other grammatical domains, such as syntax and morphology.

  20. Retention of New Words: Quantity of Encounters, Quality of Task, and Degree of Knowledge

    Science.gov (United States)

    Laufer, Batia; Rozovski-Roitblat, Bella

    2015-01-01

    We examined how learning new second language (L2) words was affected by three "task type" conditions (reading only, reading with a dictionary, reading and word focused exercises), three "number of encounters" conditions and their combinations. Three groups of L2 learners (n = 185) were exposed to 30 target words (one group in…

  1. The Influence of Topic Status on Written and Spoken Sentence Production

    Science.gov (United States)

    Cowles, H. Wind; Ferreira, Victor S.

    2012-01-01

    Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production. PMID:22408281

  2. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
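
    The equal error rate (EER) reported above is the operating point at which the false-acceptance and false-rejection rates of the language detector coincide. The sketch below shows one common way such a figure could be estimated from detection scores; the score distributions are synthetic placeholders, not the paper's system outputs.

```python
# Hedged sketch: estimate an equal error rate (EER) from target and non-target trial
# scores by sweeping thresholds until false-acceptance and false-rejection rates cross.
import numpy as np

def equal_error_rate(target_scores, nontarget_scores):
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds])  # false acceptances
    frr = np.array([(target_scores < t).mean() for t in thresholds])      # false rejections
    idx = np.argmin(np.abs(far - frr))  # threshold where the two rates are closest
    return (far[idx] + frr[idx]) / 2.0

rng = np.random.default_rng(0)
targets = rng.normal(2.0, 1.0, 1000)     # synthetic same-language trial scores
nontargets = rng.normal(0.0, 1.0, 5000)  # synthetic different-language trial scores
print(f"EER ~ {100 * equal_error_rate(targets, nontargets):.2f}%")
```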

  3. Stimulus-independent semantic bias misdirects word recognition in older adults.

    Science.gov (United States)

    Rogers, Chad S; Wingfield, Arthur

    2015-07-01

    Older adults' normally adaptive use of semantic context to aid in word recognition can have a negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word-pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed older adults' greater tendency to misidentify words based on their semantic context compared to the young adults, and to do so with a higher level of confidence. This age difference was unaffected by differences in the relative level of acoustic masking.

  4. Young toddlers' word comprehension is flexible and efficient.

    Directory of Open Access Journals (Sweden)

    Elika Bergelson

    Full Text Available Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children's gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers' spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice, which could be called "cup" or "juice") and pictures that had only one likely name for toddlers (such as "apple"), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.

  5. GLI ERRORI DI ITALIANO L1 ED L2: INTERFERENZA E APPRENDIMENTO

    Directory of Open Access Journals (Sweden)

    Rosaria Solarino

    2011-02-01

    Full Text Available Errors in Italian L1 and L2: interference and learning. Can errors in Italian be approached today from a perspective that benefits both L1 and L2 Italian teachers? We believe so: glottodidactic research seems to have prepared a common terrain for these two learning situations, clearing the field of old prejudices and distinctions that now appear obsolete. Through the juxtaposition of concepts such as "spoken language/written language", "language errors/speech errors", "spontaneous learning/guided learning", "L1 Italian/L2 Italian" and "learning errors/interference errors", different criteria are singled out for interpreting errors and evaluating them in relation to their causes, to communicative situations, to contexts and to the developmental stage of language learning.

  6. Lexical Availability of Young Spanish EFL Learners: Emotion Words versus Non-Emotion Words

    Science.gov (United States)

    Jiménez Catalán, Rosa M.; Dewaele, Jean-Marc

    2017-01-01

    This study intends to contribute to L2 emotion vocabulary research by looking at the words that primary-school English as foreign language learners produce in response to prompts in a lexical availability task. Specifically, it aims to ascertain whether emotion prompts (Love, Hate, Happy and Sad) generate a greater number of words than non-emotion…

  7. ASTER L2 Surface Reflectance SWIR and ASTER L2 Surface Reflectance VNIR V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The ASTER L2 Surface Reflectance is a multi-file product that contains atmospherically corrected data for both the Visible Near-Infrared (VNIR) and Shortwave...

  8. Immunogenicity of an HPV-16 L2 DNA vaccine

    Science.gov (United States)

    Hitzeroth, Inga I.; Passmore, Jo-Ann S.; Shephard, Enid; Stewart, Debbie; Müller, Martin; Williamson, Anna-Lise; Rybicki, Edward P.; Kast, W. Martin

    2009-01-01

    The ability to elicit cross-neutralizing antibodies makes human papillomavirus (HPV) L2 capsid protein a possible HPV vaccine. We examined and compared the humoral response of mice immunised with an HPV-16 L2 DNA vaccine or with HPV-16 L2 protein. The L2 DNA vaccine elicited a non-neutralising antibody response, unlike the L2 protein. L2 DNA vaccination suppressed the growth of L2-expressing C3 tumor cells, which is a T cell mediated effect, demonstrating that the lack of neutralizing antibody induction by L2 DNA was not caused by lack of T cell immunogenicity of the construct. PMID:19559114

  9. Periodic words connected with the Fibonacci words

    Directory of Open Access Journals (Sweden)

    G. M. Barabash

    2016-06-01

    Full Text Available In this paper we introduce two families of periodic words (FLP-words of type 1 and FLP-words of type 2) that are connected with the Fibonacci words, and investigate their properties.
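
    The FLP-word constructions themselves are not reproduced in this record; as background only, a minimal sketch of the classical Fibonacci words they build on is given below (one common convention, starting from "a" and "ab").

```python
# Background sketch: classical Fibonacci words, each obtained by concatenating
# the two preceding words. This illustrates the base objects only; the paper's
# FLP-word families are not reproduced here.
def fibonacci_words(n):
    """Return the first n Fibonacci words over the alphabet {a, b}."""
    words = ["a", "ab"]
    while len(words) < n:
        words.append(words[-1] + words[-2])
    return words[:n]

print(fibonacci_words(5))  # ['a', 'ab', 'aba', 'abaab', 'abaababa']
```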

  10. Orthogonal Multiwavelet Frames in L2(Rd)

    Directory of Open Access Journals (Sweden)

    Liu Zhanwei

    2012-01-01

    Full Text Available We characterize the orthogonal frames and orthogonal multiwavelet frames in L2(Rd) with matrix dilations of the form (Df)(x) = |det A|^(1/2) f(Ax), where A is an arbitrary expanding d×d matrix with integer coefficients. Firstly, through two arbitrary multiwavelet frames, we give a simple construction of a pair of orthogonal multiwavelet frames. Then, by using the unitary extension principle, we present an algorithm for the construction of arbitrarily many orthogonal multiwavelet tight frames. Finally, we give a general construction algorithm for orthogonal multiwavelet tight frames from a scaling function.
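
    For reference, the standard objects assumed in the abstract can be written out as follows (a notational sketch only; the paper's specific constructions are not reproduced here): the unitary dilation on L2(Rd) induced by the expanding integer matrix A, and the frame inequality with bounds 0 < C1 ≤ C2.

    $$(D_A f)(x) \;=\; |\det A|^{1/2}\, f(Ax), \qquad f \in L^2(\mathbb{R}^d),$$

    $$C_1\,\lVert f\rVert^2 \;\le\; \sum_{\psi \in \Psi}\;\sum_{j \in \mathbb{Z}}\;\sum_{k \in \mathbb{Z}^d} \bigl|\langle f,\, D_A^{\,j} T_k\, \psi \rangle\bigr|^2 \;\le\; C_2\,\lVert f\rVert^2 \quad \text{for all } f \in L^2(\mathbb{R}^d),$$

    where $T_k f(x) = f(x-k)$ denotes translation and $\Psi$ is the finite set of multiwavelet generators.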

  11. ECR heating in L-2M stellarator

    International Nuclear Information System (INIS)

    Grebenshchikov, S.E.; Batanov, G.M.; Fedyanin, O.I.

    1995-01-01

    The first results of ECH experiments in the L-2M stellarator are presented. The main goal of the experiments is to investigate the physics of ECH and plasma confinement at very high values of the volume heating power density. A current free plasma is produced and heated by extraordinary waves at the second harmonic of the electron cyclotron frequency. The experimental results are compared with the numerical simulations of plasma confinement and heating processes based on neoclassical theory using the full matrix of transport coefficients and with LHD-scaling. 4 refs., 2 figs

  12. Learning words

    DEFF Research Database (Denmark)

    Jaswal, Vikram K.; Hansen, Mikkel

    2006-01-01

    Children tend to infer that when a speaker uses a new label, the label refers to an unlabeled object rather than one they already know the label for. Does this inference reflect a default assumption that words are mutually exclusive? Or does it instead reflect the result of a pragmatic reasoning process about what the speaker intended? In two studies, we distinguish between these possibilities. Preschoolers watched as a speaker pointed toward (Study 1) or looked at (Study 2) a familiar object while requesting the referent for a new word (e.g. 'Can you give me the blicket?'). In both studies, despite the speaker's unambiguous behavioral cue indicating an intent to refer to a familiar object, children inferred that the novel label referred to an unfamiliar object. These results suggest that children expect words to be mutually exclusive even when a speaker provides some kinds of pragmatic...

  13. Effects of Word Frequency and Transitional Probability on Word Reading Durations of Younger and Older Speakers.

    Science.gov (United States)

    Moers, Cornelia; Meyer, Antje; Janse, Esther

    2017-06-01

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups-younger children (8-12 years), adolescents (12-18 years) and older (62-95 years) Dutch speakers-show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
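
    As a worked rendering of the measure described above (one common operationalization; the study's exact corpus counts are not reproduced here), the backward and forward transitional probabilities of a word can be written as bigram-to-unigram frequency ratios:

    $$\mathrm{TP}_{\mathrm{backward}}(w_i \mid w_{i-1}) \;=\; \frac{f(w_{i-1}\, w_i)}{f(w_{i-1})}, \qquad \mathrm{TP}_{\mathrm{forward}}(w_i \mid w_{i+1}) \;=\; \frac{f(w_i\, w_{i+1})}{f(w_{i+1})},$$

    where $f(\cdot\;\cdot)$ is a bigram corpus frequency and $f(\cdot)$ a unigram corpus frequency.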

  14. The impact of music on learning and consolidation of novel words.

    Science.gov (United States)

    Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J

    2017-01-01

    Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.

  15. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  16. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    .... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...

  17. English Loan Words in the Speech of Six-Year-Old Navajo Children, with Supplement-Concordance.

    Science.gov (United States)

    Holm, Agnes; And Others

    As part of a study of the feasibility and effect of teaching Navajo children to read their own language first, preliminary data on English loan words in the speech of 6-year-old Navajos were gathered in this study of the language of over 200 children. Taped interviews with these children were analyzed, and a spoken word count of all English words…

  18. Does Set for Variability Mediate the Influence of Vocabulary Knowledge on the Development of Word Recognition Skills?

    Science.gov (United States)

    Tunmer, William E.; Chapman, James W.

    2012-01-01

    This study investigated the hypothesis that vocabulary influences word recognition skills indirectly through "set for variability", the ability to determine the correct pronunciation of approximations to spoken English words. One hundred forty children participating in a 3-year longitudinal study were administered reading and…

  19. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  20. A Spoken English Recognition Expert System.

    Science.gov (United States)

    1983-09-01

    References cited include "Speech Recognition by Computer," Scientific American, April 1981: 64-76, and Marcus, Mitchell P., A Theory of Syntactic... Possible words for the voice decoder to choose from are: gents, dishes, issues, itches, ewes, folks, foes, communications, units, eunuchs, error, farce

  1. Negative evidence in L2 acquisition

    Directory of Open Access Journals (Sweden)

    Anne Dahl

    2005-02-01

    Full Text Available This article deals with the L2 acquisition of differences between Norwegian and English passives, and presents data to show that the acquisition of these differences by Norwegian L2 acquirers of English cannot be fully explained by positive evidence, cues, conservativism or economy. Rather, it is argued, it is natural to consider whether indirect negative evidence may facilitate acquisition by inferencing. The structures in focus are impersonal passive constructions with postverbal NPs and passive constructions with intransitive verbs. These sentences are ungrammatical in English. Chomsky (1981) proposes that this is a result of passive morphology absorbing objective case in English. There is no such case to be assigned to the postverbal NP in impersonal passives. In passive constructions with intransitive verbs, the verb does not assign objective case, so that there is no case for the passive morphology to absorb. Thus, impersonal passives have to be changed into personal passives, where the NP receives nominative case, and the objective case is free to go to the passive morphology. Intransitive verbs, however, cannot be used in the passive voice at all. Both of the structures discussed in this article, i.e. impersonal passives with postverbal NPs and passives with intransitive verbs, are grammatical in Norwegian. However, the options available in English, viz. personal passives and active sentences, are equally possible. Åfarli (1992) therefore proposes that Norwegian has optional case absorption (passive morphology optionally absorbs case). On the basis of such observations, we may propose a parameter with the settings [+case absorption] for English, and [-case absorption], signifying optional case absorption, for Norwegian. This means that none of the structures that are grammatical in English can function as positive evidence for the [+case absorption] setting, since they are also grammatical in optional case absorption languages. The question is how this parameter is set.

  2. Does "Word Coach" Coach Words?

    Science.gov (United States)

    Cobb, Tom; Horst, Marlise

    2011-01-01

    This study reports on the design and testing of an integrated suite of vocabulary training games for Nintendo[TM] collectively designated "My Word Coach" (Ubisoft, 2008). The games' design is based on a wide range of learning research, from classic studies on recycling patterns to frequency studies of modern corpora. Its general usage…

  3. Word translation at three levels of proficiency in a second language: the ubiquitous involvement of conceptual memory

    NARCIS (Netherlands)

    de Groot, A.M.B.; Poot, R.

    1997-01-01

    Three groups of 20 unbalanced bilinguals, different from one another in second language (L2) fluency, translated one set of words from L1, Dutch, to L2, English (forward translation), and a second set of matched words from L2 to L1 (backward translation). In both language sets we orthogonally

  4. Word wheels

    CERN Document Server

    Clark, Kathryn

    2013-01-01

    Targeting the specific problems learners have with language structure, these multi-sensory exercises appeal to all age groups including adults. Exercises use sight, sound and touch and are also suitable for English as an Additional Language and Basic Skills students. Word Wheels includes off-the-shelf resources including lesson plans and photocopiable worksheets, an interactive CD with practice exercises, and support material for the busy teacher or non-specialist staff, as well as homework activities.

  5. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
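
    The additive factors logic invoked above can be made explicit with a schematic model of the spelling latencies (a sketch of the logic only, not the authors' statistical model): with word frequency level $f$ and phonological neighborhood density level $d$,

    $$\mathrm{RT}(f, d) \;=\; \mu + \alpha_f + \beta_d \quad \text{(additive: the two factors affect different processing stages)},$$

    $$\mathrm{RT}(f, d) \;=\; \mu + \alpha_f + \beta_d + \gamma_{fd} \quad \text{(overadditive, } \gamma_{fd} \neq 0\text{: the two factors share a processing stage)}.$$

    The absence of a reliable $\gamma_{fd}$ interaction is what the study takes as evidence for an orthographic-output locus of the frequency effect.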

  6. Technology-Enhanced L2 Reading: The Effects of Hierarchical Phrase Segmentation

    OpenAIRE

    Park, Youngmin

    2016-01-01

    Sentence processing skills contribute to successful L2 reading, and require understanding and manipulation of the order and interdependence of words within a sentence. Acknowledging that this syntactic awareness is significant but challenging in language development, a small number of researchers have explored whether modified text format increases reading abilities by raising syntactic awareness. This mixed within- and between-subjects study aims to examine whether phrase-segmented format, s...

  7. Limitations of the influence of English phonetics and phonology on L2 Spanish rhotics

    Directory of Open Access Journals (Sweden)

    Michael Kevin Olsen

    2016-12-01

    Full Text Available This study investigates L2 Spanish rhotic production in intermediate learners of Spanish, specifically addressing the duration of the influence of L1 English rhotic articulations and a phonetic environment involving English taps on the acquisition of Spanish taps and trills that Olsen (2012) found. Results from multiple linear regressions involving thirty-five students in Spanish foreign language classes show that the effect of English rhotic articulations evident in beginners has disappeared after four semesters of Spanish study. However, results from paired samples t-tests show that these more advanced learners produced accurate taps significantly more in words containing phonetic environments that produce taps in English. This effect is taken as evidence that L1 phonetic influences have a shorter duration on L2 production than do L1 phonological influences. These results provide insights into L2 rhotic acquisition which Spanish educators and students can use to formulate reasonable pronunciation expectations.

  8. Psycholinguistic norms for action photographs in French and their relationships with spoken and written latencies.

    Science.gov (United States)

    Bonin, Patrick; Boyer, Bruno; Méot, Alain; Fayol, Michel; Droit, Sylvie

    2004-02-01

    A set of 142 photographs of actions (taken from Fiez & Tranel, 1997) was standardized in French on name agreement, image agreement, conceptual familiarity, visual complexity, imageability, age of acquisition, and duration of the depicted actions. Objective word frequency measures were provided for the infinitive modal forms of the verbs and for the cumulative frequency of the verbal forms associated with the photographs. Statistics on the variables collected for action items were provided and compared with the statistics on the same variables collected for object items. The relationships between these variables were analyzed, and certain comparisons between the current database and other similar published databases of pictures of actions are reported. Spoken and written naming latencies were also collected for the photographs of actions, and multiple regression analyses revealed that name agreement, image agreement, and age of acquisition are the major determinants of action naming speed. Finally, certain analyses were performed to compare object and action naming times. The norms and the spoken and written naming latencies corresponding to the pictures are available on the Internet (http://www.psy.univ-bpclermont.fr/~pbonin/pbonin-eng.html) and should be of great use to researchers interested in the processing of actions.

  9. Could a Multimodal Dictionary Serve as a Learning Tool? An Examination of the Impact of Technologically Enhanced Visual Glosses on L2 Text Comprehension

    Science.gov (United States)

    Sato, Takeshi

    2016-01-01

    This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it…

  10. ASC Trilab L2 Codesign Milestone 2015

    Energy Technology Data Exchange (ETDEWEB)

    Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Simon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinge, Dennis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lin, Paul T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vaughan, Courtenay T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cook, Jeanine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Edwards, Harold C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajan, Mahesh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoekstra, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    For the FY15 ASC L2 Trilab Codesign milestone, Sandia National Laboratories performed two main studies. The first study investigated three topics (performance, cross-platform portability and programmer productivity) when using OpenMP directives and the RAJA and Kokkos programming models available from LLNL and SNL, respectively. The focus of this first study was the LULESH mini-application developed and maintained by LLNL. In the coming sections of the report the reader will find performance comparisons (and a demonstration of portability) for a variety of mini-application implementations produced during this study with varying levels of optimization. Of note is that the implementations utilized include optimizations across a number of programming models, to help ensure that claims that Kokkos can provide native-class application performance are valid. The second study performed during FY15 is a performance assessment of the MiniAero mini-application developed by Sandia. This mini-application was developed by the SIERRA Thermal-Fluid team at Sandia for the purposes of learning the Kokkos programming model and so is available in only a single implementation. For this report we studied its performance and scaling on a number of machines with the intent of providing insight into potential performance issues that may be experienced when similar algorithms are deployed on the forthcoming Trinity ASC ATS platform.

  11. L2, the minor capsid protein of papillomavirus

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Joshua W. [Department of Pathology, The Johns Hopkins University, Baltimore, MD 21287 (United States); Roden, Richard B.S., E-mail: roden@jhmi.edu [Department of Pathology, The Johns Hopkins University, Baltimore, MD 21287 (United States); Department of Oncology, The Johns Hopkins University, Baltimore, MD 21287 (United States); Department of Gynecology and Obstetrics, The Johns Hopkins University, Baltimore, MD 21287 (United States)

    2013-10-15

    The capsid protein L2 plays major roles in both papillomavirus assembly and the infectious process. While L1 forms the majority of the capsid and can self-assemble into empty virus-like particles (VLPs), L2 is a minor capsid component and lacks the capacity to form VLPs. However, L2 co-assembles with L1 into VLPs, enhancing their assembly. L2 also facilitates encapsidation of the ∼8 kbp circular and nucleosome-bound viral genome during assembly of the non-enveloped T=7d virions in the nucleus of terminally differentiated epithelial cells, although, like L1, L2 is not detectably expressed in infected basal cells. With respect to infection, L2 is not required for particles to bind to and enter cells. However L2 must be cleaved by furin for endosome escape. L2 then travels with the viral genome to the nucleus, wherein it accumulates at ND-10 domains. Here, we provide an overview of the biology of L2. - Highlights: • L2 is the minor antigen of the non-enveloped T=7d icosahedral Papillomavirus capsid. • L2 is a nuclear protein that can traffic to ND-10 and facilitate genome encapsidation. • L2 is critical for infection and must be cleaved by furin. • L2 is a broadly protective vaccine antigen recognized by neutralizing antibodies.

  13. Cognate effects in sentence context depend on word class, L2 proficiency, and task

    NARCIS (Netherlands)

    Bultena, S.S.; Dijkstra, A.F.J.; Hell, J.G. van

    2014-01-01

    Noun translation equivalents that share orthographic and semantic features, called "cognates", are generally recognized faster than translation equivalents without such overlap. This cognate effect, which has also been obtained when cognates and noncognates were embedded in a sentence context,

  14. Processing emotional words in two languages with one brain: ERP and fMRI evidence from Chinese-English bilinguals.

    Science.gov (United States)

    Chen, Peiyao; Lin, Jie; Chen, Bingle; Lu, Chunming; Guo, Taomei

    2015-10-01

    Emotional words in a bilingual's second language (L2) seem to have less emotional impact compared to emotional words in the first language (L1). The present study examined the neural mechanisms of emotional word processing in Chinese-English bilinguals' two languages by using both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Behavioral results show a robust positive word processing advantage in L1 such that responses to positive words were faster and more accurate compared to responses to neutral words and negative words. In L2, emotional words only received higher accuracies than neutral words. In ERPs, positive words elicited a larger early posterior negativity and a smaller late positive component than neutral words in L1, while a trend of reduced N400 component was found for positive words compared to neutral words in L2. In fMRI, reduced activation was found for L1 emotional words in both the left middle occipital gyrus and the left cerebellum whereas increased activation in the left cerebellum was found for L2 emotional words. Altogether, these results suggest that emotional word processing advantage in L1 relies on rapid and automatic attention capture while facilitated semantic retrieval might help processing emotional words in L2. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. When does word frequency influence written production?

    Science.gov (United States)

    Baus, Cristina; Strijkers, Kristof; Costa, Albert

    2013-01-01

    The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed to which letter were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner that words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  17. Automatization and Orthographic Development in Second Language Visual Word Recognition

    Science.gov (United States)

    Kida, Shusaku

    2016-01-01

    The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…

  18. Word Domain Disambiguation via Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.

    2006-06-04

    Word subject domains have been widely used to improve the performance of word sense disambiguation algorithms. However, comparatively little effort has been devoted so far to the disambiguation of word subject domains. The few existing approaches have focused on the development of algorithms specific to word domain disambiguation. In this paper we explore an alternative approach where word domain disambiguation is achieved via word sense disambiguation. Our study shows that this approach yields very strong results, suggesting that word domain disambiguation can be addressed in terms of word sense disambiguation with no need for special purpose algorithms.
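
    A minimal sketch of the idea described above (an illustration, not the authors' system): disambiguate a word's sense with an off-the-shelf WSD method and then look the resulting sense up in a sense-to-domain table. The lesk call is from NLTK; the SENSE_TO_DOMAIN mapping is a hypothetical stand-in for a resource such as WordNet Domains.

```python
# Hypothetical illustration of word domain disambiguation via word sense disambiguation.
# Requires: pip install nltk, plus the 'wordnet' and 'punkt' NLTK data packages.
from nltk import word_tokenize
from nltk.wsd import lesk

# Hypothetical sense -> subject-domain table (a real system would use a resource
# such as WordNet Domains; these entries are only for illustration).
SENSE_TO_DOMAIN = {
    "bank.n.01": "geography",   # sloping land beside a body of water
    "bank.n.02": "economy",     # financial institution
}

def disambiguate_domain(sentence, target_word):
    """Return (synset name, domain) for target_word in the given sentence."""
    context = word_tokenize(sentence)
    synset = lesk(context, target_word)     # classic Lesk WSD from NLTK
    if synset is None:
        return None, "unknown"
    return synset.name(), SENSE_TO_DOMAIN.get(synset.name(), "unknown")

print(disambiguate_domain("She deposited the money in the bank", "bank"))
```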

  19. L2TTMON Monitoring Program for L2 Topological Trigger in H1 Experiment - User's Guide

    International Nuclear Information System (INIS)

    Banas, E.; Ducorps, A.

    1999-01-01

    Monitoring software for the L2 Topological Trigger in the H1 experiment consists of two parts running on two different computers. The hardware read-out and data processing are done on a fast FIC 8234 computer running the OS9 real-time operating system. The Macintosh Quadra is used as a graphical user interface for accessing the OS9 trigger monitoring software. Communication between the two computers is based on a parallel connection between the Macintosh and the VME crate in which the FIC computer is placed. A specially designed client-server protocol is used to communicate between the two nodes. The general scheme of monitoring for the L2 Topological Trigger and a detailed description of the use of the monitoring software on both nodes are given in this guide. (author)

  20. The Development of Conjunction Use in Advanced L2 Speech

    Science.gov (United States)

    Jaroszek, Marcin

    2011-01-01

    The article discusses the results of a longitudinal study of how the use of conjunctions, as an aspect of spoken discourse competence of 13 selected advanced students of English, developed throughout their 3-year English as a foreign language (EFL) tertiary education. The analysis was carried out in relation to a number of variables, including 2…

  1. Electronic Control System Of Home Appliances Using Speech Command Words

    Directory of Open Access Journals (Sweden)

    Aye Min Soe

    2015-06-01

    Full Text Available Abstract The main idea of this paper is to develop a speech recognition system with which smart home appliances are controlled by spoken words. The spoken words chosen for recognition are "Fan On", "Fan Off", "Light On", "Light Off", "TV On" and "TV Off". The system takes speech signals as input to control home appliances. The proposed system has two main parts: speech recognition and the electronic control system for the smart home appliances. Speech recognition is implemented in the MATLAB environment and contains two main modules: feature extraction and feature matching. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction. A Vector Quantization (VQ) approach using a clustering algorithm is applied for feature matching. In the electrical home appliance control system, an RF module is used to carry the command signal wirelessly from the PC to a microcontroller, which is connected to a driver circuit for the relay and motor. The input commands are recognized very well, and the system performs well in controlling home appliances by spoken words.
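
    The described pipeline (MFCC features matched against per-command VQ codebooks) can be sketched as follows. This is a hedged illustration in Python rather than the authors' MATLAB implementation; librosa and SciPy are assumed to be available, and the sampling rate, codebook size and file layout are illustrative choices.

```python
# Hedged sketch of an MFCC + Vector Quantization command recognizer, not the
# authors' MATLAB code: one VQ codebook is trained per command, and a new
# utterance is assigned to the command whose codebook quantizes it with the
# least average distortion.
import numpy as np
import librosa
from scipy.cluster.vq import kmeans, vq

COMMANDS = ["fan_on", "fan_off", "light_on", "light_off", "tv_on", "tv_off"]

def mfcc_features(wav_path, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of MFCCs for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_codebooks(training_files, codebook_size=32):
    """training_files: dict command -> list of wav paths. Returns one codebook per command."""
    codebooks = {}
    for command, paths in training_files.items():
        frames = np.vstack([mfcc_features(p) for p in paths])
        codebooks[command], _ = kmeans(frames, codebook_size)
    return codebooks

def recognize(wav_path, codebooks):
    """Pick the command whose codebook quantizes the utterance with least distortion."""
    frames = mfcc_features(wav_path)
    distortions = {cmd: vq(frames, cb)[1].mean() for cmd, cb in codebooks.items()}
    return min(distortions, key=distortions.get)
```

    The recognized command string would then be sent over the RF link to the microcontroller driving the relays, as described in the record.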

  2. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

    model (JSM), developed using Sequitur [16,17] and trained on the CMUDict 0.7b [18] American English dictionary (over 134k words), was used to detect English ... modeled using the closest Swahili vowel or vowel combination. In both cases these English L2P predictions were added to a dictionary as variants to swa... English queries as a function of overlap/correspondence with an existing reference English pronunciation dictionary. As the reference dictionary, we

  3. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
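
    As a minimal sketch of the regional ALFF measure described above (a simplified stand-in, not the authors' preprocessing and analysis pipeline), the amplitude of low-frequency fluctuation for each voxel can be computed from its resting-state time series as the mean spectral amplitude in the 0.01-0.08 Hz band:

```python
# Hedged sketch of a simplified ALFF computation (not the authors' pipeline):
# for each voxel's resting-state time series, take the mean spectral amplitude
# in the 0.01-0.08 Hz band. Assumes motion/nuisance preprocessing is already done.
import numpy as np

def alff(timeseries, tr, low=0.01, high=0.08):
    """Amplitude of low-frequency fluctuation per voxel.

    timeseries -- array of shape (n_voxels, n_timepoints)
    tr         -- repetition time in seconds
    Returns an array of shape (n_voxels,) with the mean amplitude in [low, high] Hz.
    """
    ts = timeseries - timeseries.mean(axis=1, keepdims=True)   # demean each voxel
    freqs = np.fft.rfftfreq(ts.shape[1], d=tr)                  # frequency axis
    amplitude = np.abs(np.fft.rfft(ts, axis=1))                 # spectral amplitude
    band = (freqs >= low) & (freqs <= high)
    return amplitude[:, band].mean(axis=1)

# Example: 1000 voxels, 200 volumes acquired at TR = 2 s
# scores = alff(np.random.randn(1000, 200), tr=2.0)
```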

  5. The effects of sad prosody on hemispheric specialization for words processing.

    Science.gov (United States)

    Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat

    2015-06-01

    This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Evaluating Bilingual and Monolingual Dictionaries for L2 Learners.

    Science.gov (United States)

    Hunt, Alan

    1997-01-01

    A discussion of dictionaries and their use for second language (L2) learning suggests that lack of computerized modern language corpora can adversely affect bilingual dictionaries, commonly used by L2 learners, and shows how use of such corpora has benefitted two contemporary monolingual L2 learner dictionaries (1995 editions of the Longman…

  7. An overview of L-2-hydroxyglutarate dehydrogenase gene (L2HGDH) variants: a genotype-phenotype study

    DEFF Research Database (Denmark)

    Steenweg, Marjan E; Jakobs, Cornelis; Errami, Abdellatif

    2010-01-01

    L-2-Hydroxyglutaric aciduria (L2HGA) is a rare, neurometabolic disorder with an autosomal recessive mode of inheritance. Affected individuals only have neurological manifestations, including psychomotor retardation, cerebellar ataxia, and more variably macrocephaly, or epilepsy. The diagnosis of ...

  8. Word Durations in Non-Native English

    Science.gov (United States)

    Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.

    2010-01-01

    In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the `accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172

  9. Dynamic spatial organization of the occipito-temporal word form area for second language processing.

    Science.gov (United States)

    Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li

    2017-08-01

    Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017

  10. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language

  11. L1 and L2 Picture Naming in Mandarin-English Bilinguals: A Test of Bilingual Dual Coding Theory

    Science.gov (United States)

    Jared, Debra; Poh, Rebecca Pei Yun; Paivio, Allan

    2013-01-01

    This study examined the nature of bilinguals' conceptual representations and the links from these representations to words in L1 and L2. Specifically, we tested an assumption of the Bilingual Dual Coding Theory that conceptual representations include image representations, and that learning two languages in separate contexts can result in…

  12. Do L1 Reading Achievement and L1 Print Exposure Contribute to the Prediction of L2 Proficiency?

    Science.gov (United States)

    Sparks, Richard L.; Patton, Jon; Ganschow, Leonore; Humbach, Nancy

    2012-01-01

    The study examined whether individual differences in high school first language (L1) reading achievement and print exposure would account for unique variance in second language (L2) written (word decoding, spelling, writing, reading comprehension) and oral (listening/speaking) proficiency after adjusting for the effects of early L1 literacy and…

  13. The determinants of spoken and written picture naming latencies.

    Science.gov (United States)

    Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel

    2002-02-01

    The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.

  14. Word Recognition Subcomponents and Passage Level Reading in a Foreign Language

    Science.gov (United States)

    Yamashita, Junko

    2013-01-01

    Despite the growing number of studies highlighting the complex process of acquiring second language (L2) word recognition skills, comparatively little research has examined the relationship between word recognition and passage-level reading ability in L2 learners; further, the existing results are inconclusive. This study aims to help fill the…

  15. Adapting the Freiburg monosyllabic word test for Slovenian

    Directory of Open Access Journals (Sweden)

    Tatjana Marvin

    2017-12-01

    Full Text Available Speech audiometry is one of the standard methods used to diagnose the type of hearing loss and to assess the communication function of the patient by determining the level of the patient’s ability to understand and repeat words presented to him or her in a hearing test. For this purpose, the Slovenian adaptations of the German tests developed by Hahlbrock (1953, 1960) – the Freiburg Monosyllabic Word Test and the Freiburg Number Test – are used in Slovenia (adapted in 1968 by Pompe). In this paper we focus on the Freiburg Monosyllabic Word Test for Slovenian, which has been criticized by patients as well as in the literature for the unequal difficulty and frequency of the words, with many of these being extremely rare or even obsolete. As part of the patient’s communication function is retrieving the meaning of individual words by guessing, the less frequent and consequently less familiar words do not contribute to reliable testing results. We therefore adapt the test by identifying and removing such words and replacing them with phonetically similar words to preserve the phonetic balance of the list. The words used for replacement are extracted from the written corpus of Slovenian Gigafida and the spoken corpus of Slovenian GOS, while the optimal combinations of words are established by using computational algorithms.
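
    A minimal sketch of the replacement step described above (an illustration only, not the authors' computational procedure): flag test words whose corpus frequency falls below a threshold and propose frequent candidates that are similar in form. String similarity is used here as a crude stand-in for the phonetic-balance criterion of the real test, and the corpus_freq input stands in for counts derived from Gigafida or GOS.

```python
# Hedged illustration of proposing replacements for rare test words.
from difflib import SequenceMatcher

def similarity(a, b):
    """Form similarity in [0, 1] between two words."""
    return SequenceMatcher(None, a, b).ratio()

def propose_replacements(test_words, corpus_freq, min_freq=5.0, top_n=3):
    """corpus_freq: dict word -> frequency (e.g. per million tokens in a reference corpus)."""
    frequent = [w for w, f in corpus_freq.items() if f >= min_freq]
    proposals = {}
    for word in test_words:
        if corpus_freq.get(word, 0.0) < min_freq:            # rare or obsolete item
            ranked = sorted(frequent, key=lambda c: similarity(word, c), reverse=True)
            proposals[word] = ranked[:top_n]
    return proposals
```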

  16. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.

  17. Cognitive, Linguistic and Print-Related Predictors of Preschool Children's Word Spelling and Name Writing

    Science.gov (United States)

    Milburn, Trelani F.; Hipfner-Boucher, Kathleen; Weitzman, Elaine; Greenberg, Janice; Pelletier, Janette; Girolametto, Luigi

    2017-01-01

    Preschool children begin to represent spoken language in print long before receiving formal instruction in spelling and writing. The current study sought to identify the component skills that contribute to preschool children's ability to begin to spell words and write their name. Ninety-five preschool children (mean age = 57 months) completed a…

  18. Gated Word Recognition by Postlingually Deafened Adults with Cochlear Implants: Influence of Semantic Context

    Science.gov (United States)

    Patro, Chhayakanta; Mendel, Lisa Lucks

    2018-01-01

    Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. Method: Listeners with CIs as well as those with normal hearing (NH)…

  19. Age of Acquisition and Sensitivity to Gender in Spanish Word Recognition

    Science.gov (United States)

    Foote, Rebecca

    2014-01-01

    Speakers of gender-agreement languages use gender-marked elements of the noun phrase in spoken-word recognition: A congruent marking on a determiner or adjective facilitates the recognition of a subsequent noun, while an incongruent marking inhibits its recognition. However, while monolinguals and early language learners evidence this…

  20. Tracking Eye Movements to Localize Stroop Interference in Naming: Word Planning Versus Articulatory Buffering

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2014-01-01

    Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the

  1. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  2. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    Science.gov (United States)

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  3. L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs

    Directory of Open Access Journals (Sweden)

    Sophie eDe Grauwe

    2014-10-01

    Full Text Available In this fMRI long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g. wegleggen ‘put aside’. Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: Some studies find more reliance on holistic (i.e. non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g. omvallen ‘fall down’) were preceded by their stem (e.g. vallen ‘fall’) with a lag of 4 to 6 words (‘primed’); the other half (e.g. inslapen ‘fall asleep’) were not (‘unprimed’). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both ROI analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.

  4. Lexical Competition Effects in Aphasia: Deactivation of Lexical Candidates in Spoken Word Processing

    Science.gov (United States)

    Janse, Esther

    2006-01-01

    Research has shown that Broca's and Wernicke's aphasic patients show different impairments in auditory lexical processing. The results of an experiment with form-overlapping primes showed an inhibitory effect of form-overlap for control adults and a weak inhibition trend for Broca's aphasic patients, but a facilitatory effect of form-overlap was…

  5. Deviant ERP Response to Spoken Non-Words among Adolescents Exposed to Cocaine in Utero

    Science.gov (United States)

    Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.

    2012-01-01

    Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…

  6. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Roy Reider (1914-1979) selections from his written and spoken words

    International Nuclear Information System (INIS)

    Paxton, H.C.

    1980-01-01

    Comments by Roy Reider on chemical criticality control, the fundamentals of safety, policy and responsibility, on written procedures, profiting from accidents, safety training, early history of criticality safety, requirements for the possible, the value of enlightened challenge, public acceptance of a new risk, and on prophets of doom are presented

  8. The power of the spoken word : Political mobilization and nation-building by Kuyper and Gladstone

    NARCIS (Netherlands)

    Hoekstra, H

    2003-01-01

    This article addresses the question why in the Netherlands it was the orthodox protestants who were able to mobilize the masses and not the political establishments of liberals and enlightened protestants during the latter part of the nineteenth century. The biblical rhetoric of their leader Abraham

  9. Spoken Word Recognition in Children with Autism Spectrum Disorder: The Role of Visual Disengagement

    Science.gov (United States)

    Venker, Courtney E.

    2017-01-01

    Deficits in visual disengagement are one of the earliest emerging differences in infants who are later diagnosed with autism spectrum disorder. Although researchers have speculated that deficits in visual disengagement could have negative effects on the development of children with autism spectrum disorder, we do not know which skills are…

  10. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Acheson, D.J.; Takashima, A.

    2013-01-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and

  11. Pupils' Knowledge and Spoken Literary Response beyond Polite Meaningless Words: Studying Yeats's "Easter, 1916"

    Science.gov (United States)

    Gordon, John

    2016-01-01

    This article presents research exploring the knowledge pupils bring to texts introduced to them for literary study, how they share knowledge through talk, and how it is elicited by the teacher in the course of an English lesson. It sets classroom discussion in a context where new examination requirements diminish the relevance of social, cultural…

  12. Orthography-Induced Length Contrasts in the Second Language Phonological Systems of L2 Speakers of English: Evidence from Minimal Pairs.

    Science.gov (United States)

    Bassetti, Bene; Sokolović-Perović, Mirjana; Mairano, Paolo; Cerni, Tania

    2018-06-01

    Research shows that the orthographic forms ("spellings") of second language (L2) words affect speech production in L2 speakers. This study investigated whether English orthographic forms lead L2 speakers to produce English homophonic word pairs as phonological minimal pairs. Targets were 33 orthographic minimal pairs, that is to say homophonic words that would be pronounced as phonological minimal pairs if orthography affects pronunciation. Word pairs contained the same target sound spelled with one letter or two, such as the /n/ in finish and Finnish (both /'fɪnɪʃ/ in Standard British English). To test for effects of length and type of L2 exposure, we compared Italian instructed learners of English, Italian-English late bilinguals with lengthy naturalistic exposure, and English natives. A reading-aloud task revealed that Italian speakers of English L2 produce two English homophonic words as a minimal pair distinguished by different consonant or vowel length, for instance producing the target /'fɪnɪʃ/ with a short [n] or a long [nː] to reflect the number of consonant letters in the spelling of the words finish and Finnish. Similar effects were found on the pronunciation of vowels, for instance in the orthographic pair scene-seen (both /siːn/). Naturalistic exposure did not reduce orthographic effects, as effects were found both in learners and in late bilinguals living in an English-speaking environment. It appears that the orthographic form of L2 words can result in the establishment of a phonological contrast that does not exist in the target language. Results have implications for models of L2 phonological development.

  13. Shrinkage Degree in $L_{2}$-Rescale Boosting for Regression.

    Science.gov (United States)

    Xu, Lin; Lin, Shaobo; Wang, Yao; Xu, Zongben

    2017-08-01

    L2-rescale boosting (L2-RBoosting) is a variant of L2-Boosting, which can essentially improve the generalization performance of L2-Boosting. The key feature of L2-RBoosting lies in introducing a shrinkage degree to rescale the ensemble estimate in each iteration. Thus, the shrinkage degree determines the performance of L2-RBoosting. The aim of this paper is to develop a concrete analysis concerning how to determine the shrinkage degree in L2-RBoosting. We propose two feasible ways to select the shrinkage degree. The first one is to parameterize the shrinkage degree and the other one is to develop a data-driven approach. After rigorously analyzing the importance of the shrinkage degree in L2-RBoosting, we compare the pros and cons of the proposed methods. We find that although these approaches can reach the same learning rates, the structure of the final estimator of the parameterized approach is better, which sometimes yields a better generalization capability when the number of samples is finite. With this, we recommend parameterizing the shrinkage degree of L2-RBoosting. We also present an adaptive parameter-selection strategy for the shrinkage degree and verify its feasibility through both theoretical analysis and numerical verification. The obtained results enhance the understanding of L2-RBoosting and give guidance on how to use it for regression tasks.
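
    A minimal sketch of the rescaling idea the abstract describes: squared-loss boosting in which the running ensemble is shrunk by a factor before each new weak learner is added. The stump learner, the schedule alpha_m = 2/(m + 2), and the use of alpha_m as the step size are illustrative assumptions, not the authors' algorithm or parameter choices.

        import numpy as np

        def fit_stump(x, r):
            """Least-squares decision stump on a 1-D feature x, fitted to residuals r."""
            best_err, best = np.inf, None
            for s in np.unique(x)[:-1]:                    # last value would leave the right side empty
                left, right = r[x <= s].mean(), r[x > s].mean()
                err = np.mean((r - np.where(x <= s, left, right)) ** 2)
                if err < best_err:
                    best_err, best = err, (s, left, right)
            if best is None:                               # constant feature: predict the mean
                return lambda z: np.full(len(z), r.mean())
            s, left, right = best
            return lambda z: np.where(z <= s, left, right)

        def l2_rescale_boosting(x, y, n_iter=100):
            """Squared-loss boosting in which the running ensemble is rescaled by
            (1 - alpha_m) before each new stump is added; alpha_m is the shrinkage
            degree, here on the assumed schedule alpha_m = 2 / (m + 2)."""
            steps, F = [], np.zeros_like(y, dtype=float)
            for m in range(1, n_iter + 1):
                alpha = 2.0 / (m + 2)
                F = (1.0 - alpha) * F                      # rescale the current estimate
                g = fit_stump(x, y - F)                    # fit the new residuals
                F = F + alpha * g(x)
                steps.append((alpha, g))
            return steps

        def predict(steps, z):
            F = np.zeros(len(z))
            for alpha, g in steps:                         # replay the rescale-and-add updates
                F = (1.0 - alpha) * F + alpha * g(z)
            return F

        # toy usage
        x = np.linspace(0, 1, 200)
        y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(200)
        steps = l2_rescale_boosting(x, y)
        print(np.mean((predict(steps, x) - y) ** 2))       # training MSE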

  14. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  15. Lexicon Optimization for Dutch Speech Recognition in Spoken Document Retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  16. Lexicon optimization for Dutch speech recognition in spoken document retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.; Dalsgaard, P.; Lindberg, B.; Benner, H.

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  17. Using the Corpus of Spoken Afrikaans to generate an Afrikaans ...

    African Journals Online (AJOL)

    This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Corpus of Spoken Afrikaans (Korpus Gesproke Afrikaans) to retrain the ALICE chatbot system with human ...

  18. Oral and Literate Strategies in Spoken and Written Narratives.

    Science.gov (United States)

    Tannen, Deborah

    1982-01-01

    Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…

  19. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    Science.gov (United States)

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  20. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

    Apr 12, 2018 ... languages and can be used for the purposes of spoken language identification. Keywords. SLID .... branch of linguistics to study the sound structure of human language. ... countries, work in the area of Indian language identification has not ...... English and speech database has been collected over tele-.

  1. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  2. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    Science.gov (United States)

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  3. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  4. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  5. A memory-based shallow parser for spoken Dutch

    NARCIS (Netherlands)

    Canisius, S.V.M.; van den Bosch, A.; Decadt, B.; Hoste, V.; De Pauw, G.

    2004-01-01

    We describe the development of a Dutch memory-based shallow parser. The availability of large treebanks for Dutch, such as the one provided by the Spoken Dutch Corpus, allows memory-based learners to be trained on examples of shallow parsing taken from the treebank, and act as a shallow parser after

  6. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    Science.gov (United States)

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  7. Abelian primitive words

    OpenAIRE

    Domaratzki, Michael; Rampersad, Narad

    2011-01-01

    We investigate Abelian primitive words, which are words that are not Abelian powers. We show that unlike classical primitive words, the set of Abelian primitive words is not context-free. We can determine whether a word is Abelian primitive in linear time. Also different from classical primitive words, we find that a word may have more than one Abelian root. We also consider enumeration problems and the relation to the theory of codes. Peer reviewed
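
    A small illustration of the definition, assuming the usual convention that an Abelian k-th power is a concatenation of k equal-length blocks with identical letter counts. This is the straightforward quadratic check, not the linear-time algorithm the abstract refers to.

        from collections import Counter

        def is_abelian_primitive(w):
            """True if w is not an Abelian k-th power for any k >= 2, i.e. it cannot be
            cut into k >= 2 equal-length blocks that are anagrams of one another."""
            n = len(w)
            for k in range(2, n + 1):
                if n % k:
                    continue
                b = n // k
                first = Counter(w[:b])
                if all(Counter(w[i:i + b]) == first for i in range(b, n, b)):
                    return False          # w is an Abelian k-th power
            return True

        # 'abba' = 'ab' + 'ba' (same letter counts) -> not Abelian primitive
        assert is_abelian_primitive('abba') is False
        assert is_abelian_primitive('aab') is True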

  8. CROATIAN ADULT SPOKEN LANGUAGE CORPUS (HrAL)

    Directory of Open Access Journals (Sweden)

    Jelena Kuvač Kraljević

    2016-01-01

    Full Text Available Interest in spoken-language corpora has increased over the past two decades, leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech defined by a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL), its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org), in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the transcription format Codes for Human Analysis of Transcripts (CHAT) and the Computerised Language Analysis (CLAN) suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units) based on syntactic criteria. Most transcripts were linked to their source audios. TalkBank is public and free, i.e. all data stored in it can be shared by the wider community in accordance with the basic rules of the TalkBank. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and studies of synchronic language changes in Croatian.

  9. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  10. An interesting case of metabolic dystonia: L-2 hydroxyglutaric aciduria

    Directory of Open Access Journals (Sweden)

    Padma Balaji

    2014-01-01

    Full Text Available L-2-hydroxyglutaric aciduria (L-2-HGA), a neurometabolic disorder caused by mutations in the L-2 hydroxyglutarate dehydrogenase (L-2-HGDH) gene, presents with psychomotor retardation, cerebellar ataxia, extrapyramidal symptoms, macrocephaly and seizures. Characteristic magnetic resonance imaging findings include subcortical cerebral white matter abnormalities with T2 hyperintensities of the dentate nucleus, globus pallidus, putamen and caudate nucleus. The diagnosis can be confirmed by elevated urinary L-2 hydroxyglutaric acid and mutational analysis of the L-2-HGDH gene. We report two siblings with dystonia diagnosed by classical neuroimaging findings with elevated urinary 2 hydroxyglutaric acid. Riboflavin therapy has shown promising results in a subset of cases, thus highlighting the importance of making the diagnosis in these patients.

  11. L-2-Hydroxyglutaric aciduria: MRI in seven cases

    Energy Technology Data Exchange (ETDEWEB)

    D'Incerti, L.; Farina, L.; Savoiardo, M. [Dept. of Neuroradiology, Istituto Nazionale Neurologico "C. Besta", Milan (Italy)]; Moroni, I.; Uziel, G. [Dept. of Child Neurology, Istituto Nazionale Neurologico "C. Besta", Milan (Italy)]

    1998-11-01

    The MRI findings in 7 patients with L-2-Hydroxyglutaric aciduria (L-2-OHG aciduria) are described and compared with previous neuroradiological reports and the only three published pathological cases. Signal abnormalities involved peripheral subcortical white matter, basal ganglia and dentate nuclei. Cerebellar atrophy was present. Although similar appearances may be seen in other metabolic disorders, the distribution of signal abnormalities in L-2-OHG aciduria is highly characteristic and may suggest the correct diagnosis. (orig.) With 5 figs., 2 tabs., 24 refs.

  12. Event-related brain potentials and second language learning: syntactic processing in late L2 learners at different L2 proficiency levels

    NARCIS (Netherlands)

    Hell, J.G. van; Tokowicz, N.

    2010-01-01

    There are several major questions in the literature on late second language (L2) learning and processing. Some of these questions include: Can late L2 learners process an L2 in a native-like way? What is the nature of the differences in L2 processing among L2 learners at different levels of L2

  13. Sensitivity to phonological context in L2 spelling: evidence from Russian ESL speakers

    DEFF Research Database (Denmark)

    Dich, Nadya

    2010-01-01

    The study attempts to investigate factors underlying the development of spellers’ sensitivity to phonological context in English. Native English speakers and Russian speakers of English as a second language (ESL) were tested on their ability to use information about the coda to predict the spelling...... on the information about the coda when spelling vowels in nonwords. In both native and non-native speakers, context sensitivity was predicted by English word spelling; in Russian ESL speakers this relationship was mediated by English proficiency. L1 spelling proficiency did not facilitate L2 context sensitivity...

  14. The Comparative Effects of Comprehensible Input, Output and Corrective Feedback on the Receptive Acquisition of L2 Vocabulary Items

    Directory of Open Access Journals (Sweden)

    Mohammad Nowbakht

    2015-08-01

    Full Text Available In the present study, the comparative effects of comprehensible input, output and corrective feedback on the receptive acquisition of L2 vocabulary items were investigated. Two groups of beginning EFL learners participated in the study. The control group received comprehensible input only, while the experimental group received input and was required to produce written output. They also received corrective feedback on their lexical errors, if any, which could result in the production of modified output. The results of the study indicated that (a) the group which produced output and received feedback (if necessary) outperformed the group which only received input in the post-test, (b) within the experimental group, feedback played a greater role than output in learners’ better performance, and (c) a positive correlation was found between the amount of feedback an individual learner received and his overall performance in the post-test, and also between the amount of feedback given for a specific word and the correct responses given to its related item in the post-test. The findings of this study provide evidence for the role of output production, along with receiving corrective feedback, in enhancing L2 processing by drawing L2 learners’ attention to their output, which in turn may improve their receptive acquisition of L2 words. Furthermore, as the results suggested, feedback made a greater contribution to L2 development than output. Keywords: comprehensible input, output, interaction, corrective feedback, modified output, receptive vocabulary acquisition

  15. Importanza della diversità diatopica nell’insegnamento della lingua spagnola come L2

    Directory of Open Access Journals (Sweden)

    Silvia Lafuente

    2013-03-01

    Full Text Available The syncretic cultural identity, in which the Spanish language and its linguistic hegemony are grounded and have reflected its political hegemony, led Spain to take on a predominant role in standardization over the centuries. However, the present situation of the Spanish language is characterized by pluricentrism covering the vast territory in which the language is spoken. This means that a number of centers have set up prestigious standard models providing norms for a country or region. Therefore, a fair enactment of this polycentrism requires national norms, different ways of codification of the Spanish language, answers to geographical and social forms which have split after a common departure and the idea that the varieties of the Spanish language fulfill speakers’ different expressive requirements and help to enhance national identities, in the face of the domination of the peninsular model. This point of view must guide linguistic research, methodology in lexicography, school grammar, translation of foreign languages and especially the teaching of Spanish as L2.

  16. Tone of voice guides word learning in informative referential contexts.

    Science.gov (United States)

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C

    2013-06-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., "daxen") spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

  17. Design Issues and Inference in Experimental L2 Research

    Science.gov (United States)

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  18. Discussion: How different can perspectives on L2 development be?

    NARCIS (Netherlands)

    Hulstijn, J.H.

    2015-01-01

    In this article I discuss the contributions to this special issue of Language Learning on orders and sequences in second language (L2) development. Using a list of questions, I attempt to characterize what I see as the strengths, limitations, and unresolved issues in the approaches to L2 development

  19. The Effects of L2 Experience on L3 Perception

    Science.gov (United States)

    Onishi, Hiromi

    2016-01-01

    This study examines the influence of experience with a second language (L2) on the perception of phonological contrasts in a third language (L3). This study contributes to L3 phonology by examining the influence of L2 phonological perception abilities on the perception of an L3 at the beginner level. Participants were native speakers of Korean…

  20. Learning and Teaching L2 Collocations: Insights from Research

    Science.gov (United States)

    Szudarski, Pawel

    2017-01-01

    The aim of this article is to present and summarize the main research findings in the area of learning and teaching second language (L2) collocations. Being a large part of naturally occurring language, collocations and other types of multiword units (e.g., idioms, phrasal verbs, lexical bundles) have been identified as important aspects of L2

  1. On the L2-metric of vortex moduli spaces

    NARCIS (Netherlands)

    Baptista, J.M.

    2011-01-01

    We derive general expressions for the Kähler form of the L2-metric in terms of standard 2-forms on vortex moduli spaces. In the case of abelian vortices in gauged linear sigma-models, this allows us to compute explicitly the Kähler class of the L2-metric. As an application we compute the total

  2. L2 learner age from a contextualised perspective

    Directory of Open Access Journals (Sweden)

    Jelena Mihaljević Djigunović

    2014-01-01

    Full Text Available In this qualitative study the author focuses on age effects on young learners’ L2 development by comparing the L2 learning processes of six young learners in an instructed setting: three who had started learning English as L2 at age 6/7 and three who had started at age 9/10. Both earlier and later young beginners were followed for three years (during their second, third and fourth year of learning English). The participants’ L2 development was measured through their oral output elicited by a two-part speaking task administered each year. Results of the analyses are interpreted taking into account each learner’s individual characteristics (learning ability, attitudes and motivation, self-concept) and the characteristics of the context in which they were learning their L2 (attitudes of school staff and parents to early L2 learning, home support, in-class and out-of-class exposure to L2, socio-economic status). The findings show that earlier and later young beginners follow different trajectories in their L2 learning, which reflects the different interactions that age enters into with the other variables.

  3. Controller synthesis for L2 behaviors using rational kernel representations

    NARCIS (Netherlands)

    Mutsaers, M.E.C.; Weiland, S.

    2008-01-01

    This paper considers the controller synthesis problem for the class of linear time-invariant L2 behaviors. We introduce classes of LTI L2 systems whose behavior can be represented as the kernel of a rational operator. Given a plant and a controlled system in this class, an algorithm is developed

  4. Slow Epidemic of Lymphogranuloma Venereum L2b Strain

    Science.gov (United States)

    Spaargaren, Joke; Schachter, Julius; Moncada, Jeanne; de Vries, Henry J.C.; Fennema, Han S.A.; Peña, A. Salvador; Coutinho, Roel A.

    2005-01-01

    We traced the Chlamydia trachomatis L2b variant in Amsterdam and San Francisco. All recent lymphogranuloma venereum cases in Amsterdam were caused by the L2b variant. This variant was also present in the 1980s in San Francisco. Thus, the current "outbreak" is most likely a slowly evolving epidemic. PMID:16318741

  5. Speaking L2 in EFL Classes: Performance, Identity and Alterity

    Science.gov (United States)

    Forman, Ross

    2014-01-01

    When teachers and students use L2 in Expanding Circle, Asian EFL classes, what kind of interpersonal roles do they perform, and what does this mean for the development of L2-mediated identity? The notion of alterity, or otherness, is used here to analyse the extent to which identity work occurs in EFL classes located in a Thai university context.…

  6. Processing advantage for emotional words in bilingual speakers.

    Science.gov (United States)

    Ponari, Marta; Rodríguez-Cuadrado, Sara; Vinson, David; Fox, Neil; Costa, Albert; Vigliocco, Gabriella

    2015-10-01

    Effects of emotion on word processing are well established in monolingual speakers. However, studies that have assessed whether affective features of words undergo the same processing in a native and nonnative language have provided mixed results: Studies that have found differences between native language (L1) and second language (L2) processing attributed the difference to the fact that L2 learned late in life would not be processed affectively, because affective associations are established during childhood. Other studies suggest that adult learners show similar effects of emotional features in L1 and L2. Differences in affective processing of L2 words can be linked to age and context of learning, proficiency, language dominance, and degree of similarity between L2 and L1. Here, in a lexical decision task on tightly matched negative, positive, and neutral words, highly proficient English speakers from typologically different L1s showed the same facilitation in processing emotionally valenced words as native English speakers, regardless of their L1, the age of English acquisition, or the frequency and context of English use. (c) 2015 APA, all rights reserved).

  7. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  8. Learning from Expository Text in L2 Reading: Memory for Causal Relations and L2 Reading Proficiency

    Science.gov (United States)

    Hosoda, Masaya

    2017-01-01

    This study explored the relation between second-language (L2) readers' memory for causal relations and their learning outcomes from expository text. Japanese students of English as a foreign language (EFL) with high and low L2 reading proficiency read an expository text. They completed a causal question and a problem-solving test as measures of…

  9. L2 Teaching in the Wild: A Closer Look at Correction and Explanation Practices in Everyday L2 Interaction

    Science.gov (United States)

    Theodorsdottor, Gudrun

    2018-01-01

    This article argues for a reconceptualization of the concept of "corrective feedback" for the investigation of correction practices in everyday second language (L2) interaction ("in the wild"). Expanding the dataset for L2 research as suggested by Firth and Wagner (1997) to include interactions from the wild has consequences…

  10. Recurrent Partial Words

    Directory of Open Access Journals (Sweden)

    Francine Blanchet-Sadri

    2011-08-01

    Full Text Available Partial words are sequences over a finite alphabet that may contain wildcard symbols, called holes, which match or are compatible with all letters; partial words without holes are said to be full words (or simply words). Given an infinite partial word w, the number of distinct full words over the alphabet that are compatible with factors of w of length n, called subwords of w, defines a measure of complexity of infinite partial words, the so-called subword complexity. This measure is of particular interest because we can construct partial words with subword complexities not achievable by full words. In this paper, we consider the notion of recurrence over infinite partial words, that is, we study whether all of the finite subwords of a given infinite partial word appear infinitely often, and we establish connections between subword complexity and recurrence in this more general framework.
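
    A minimal sketch of the subword notion used above: the full words of length n compatible with some length-n factor of a partial word, with '?' standing for a hole. The hole symbol and the toy word are assumptions made for illustration.

        from itertools import product

        HOLE = '?'

        def subwords(w, n, alphabet):
            """Full words of length n over `alphabet` compatible with some
            length-n factor of the partial word w (holes written as '?')."""
            out = set()
            for i in range(len(w) - n + 1):
                factor = w[i:i + n]
                # positions holding a hole are free; fill them in every possible way
                free = [j for j, c in enumerate(factor) if c == HOLE]
                for fill in product(alphabet, repeat=len(free)):
                    chars = list(factor)
                    for j, c in zip(free, fill):
                        chars[j] = c
                    out.add(''.join(chars))
            return out

        # p(n) = number of distinct subwords of length n ("subword complexity")
        w = 'ab?ab'
        print(len(subwords(w, 2, 'ab')))   # factors 'ab', 'b?', '?a', 'ab' -> {'ab','ba','bb','aa'} -> 4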

  11. Spoken language outcomes after hemispherectomy: factoring in etiology.

    Science.gov (United States)

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  12. Unconscious motivation. Part II: Implicit attitudes and L2 achievement

    Directory of Open Access Journals (Sweden)

    Ali H. Al-Hoorie

    2016-12-01

    Full Text Available This paper investigates the attitudinal/motivational predictors of second language (L2) academic achievement. Young adult learners of English as a foreign language (N = 311) completed several self-report measures and the Single-Target Implicit Association Test. Examination of the motivational profiles of high and low achievers revealed that attachment to the L1 community and the ought-to L2 self were negatively associated with achievement, while explicit attitudes toward the L2 course and implicit attitudes toward L2 speakers were positively associated with it. The relationship between implicit attitudes and achievement could not be explained either by social desirability or by other cognitive confounds, and remained significant after controlling for explicit self-report measures. Explicit–implicit congruence also revealed a similar pattern, in that congruent learners were more open to the L2 community and obtained higher achievement. The results also showed that neither the ideal L2 self nor intended effort had any association with actual L2 achievement, and that intended effort was particularly prone to social desirability biases. Implications of these findings are discussed.

  13. Personalized versus Normal Practice of L2 Speaking on Iranian EFL Learners’ Oral Proficiency

    Directory of Open Access Journals (Sweden)

    Ayda Rahmani

    2015-03-01

    Full Text Available Personalized learning is a self-initiated, self-directed or self-prioritized pursuit which gives the learner a degree of choice about the process of learning, i.e. what to learn, how to learn and when to learn. Of course personalized learning does not mean unlimited choice, because L2 learners will still have targets to be met. However, it provides learners with the opportunity to learn in ways that suit their individual learning styles. The L2 learner should have the opportunity to freely choose a series of activities, already predisposed by the teacher, to improve and develop L2 proficiency. This is because human beings have different ways to learn and process information, and these different ways of learning are independent of each other. In other words, learning styles and techniques differ across individuals; thus, personalized learning allows L2 learners to freely choose the activities they enjoy the most. It is therefore a student-centered learning method in which the interests and preferences of the learner are taken into account. The present study is an investigation of personalized versus normal practice of L2 proficiency. For this purpose an OPT (Oxford Placement Test) was given to a total of 80 Iranian EFL learners. Then, 40 of them who were considered intermediate learners were selected for the purpose of the study. The participants were randomly divided into two groups, i.e. an experimental group and a control group. Both groups were pretested prior to the study. Then, the experimental group received the treatment in the form of personalized learning (games-based learning, songs, music, stories, English tongue twisters and the materials that the subjects were most interested in) for ten sessions, while the control group received normal practice of speaking proficiency (based on New Interchange course books). After ten sessions, both groups were post-tested. Then the results of the posttests were subjected to statistical analysis.

  14. Early access to lexical-level phonological representations of Mandarin word-forms : evidence from auditory N1 habituation

    NARCIS (Netherlands)

    Yue, Jinxing; Alter, Kai; Howard, David; Bastiaanse, Roelien

    2017-01-01

    An auditory habituation design was used to investigate whether lexical-level phonological representations in the brain can be rapidly accessed after the onset of a spoken word. We studied the N1 component of the auditory event-related electrical potential, and measured the amplitude decrements of N1

  15. It's a Mad, Mad Wordle: For a New Take on Text, Try This Fun Word Cloud Generator

    Science.gov (United States)

    Foote, Carolyn

    2009-01-01

    Nation. New. Common. Generation. These are among the most frequently used words spoken by President Barack Obama in his January 2009 inauguration speech as seen in a fascinating visual display called a Wordle. Educators, too, can harness the power of Wordle to enhance learning. Imagine providing students with a whole new perspective on…

  16. Universal Lyndon Words

    OpenAIRE

    Carpi, Arturo; Fici, Gabriele; Holub, Stepan; Oprsal, Jakub; Sciortino, Marinella

    2014-01-01

    A word $w$ over an alphabet $\Sigma$ is a Lyndon word if there exists an order defined on $\Sigma$ for which $w$ is lexicographically smaller than all of its conjugates (other than itself). We introduce and study \emph{universal Lyndon words}, which are words over an $n$-letter alphabet that have length $n!$ and such that all the conjugates are Lyndon words. We show that universal Lyndon words exist for every $n$ and exhibit combinatorial and structural properties of these words. We then defi...
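
    A small sketch of the definitions, assuming the usual reading: a word is Lyndon for a given letter order if it is strictly smaller than every other rotation of itself, and a universal Lyndon word is one all of whose rotations are Lyndon for some order. The brute-force search over letter orders is only feasible for tiny alphabets.

        from itertools import permutations

        def is_lyndon(w, order):
            """Is w strictly smaller than all of its other rotations, under the
            letter order given as a string (e.g. 'bac' means b < a < c)?"""
            rank = {c: i for i, c in enumerate(order)}
            key = lambda s: [rank[c] for c in s]
            rotations = [w[i:] + w[:i] for i in range(1, len(w))]
            return all(key(w) < key(r) for r in rotations)

        def is_universal_lyndon(w):
            """Every rotation of w is a Lyndon word for some order of the letters of w."""
            letters = sorted(set(w))
            rotations = [w[i:] + w[:i] for i in range(len(w))]
            return all(
                any(is_lyndon(r, ''.join(p)) for p in permutations(letters))
                for r in rotations
            )

        print(is_lyndon('aab', 'ab'))        # True: 'aab' precedes 'aba' and 'baa'
        print(is_universal_lyndon('ab'))     # True: 'ab' is Lyndon for a<b, 'ba' for b<a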

  17. Online multilingual vocabulary system and its application in L2 learning

    Directory of Open Access Journals (Sweden)

    Haruko Miyakoda

    2010-06-01

    Full Text Available In the field of second language teaching, vocabulary has been one of the most neglected areas in the classroom. Although language teachers/ instructors are well aware of the importance of vocabulary, there is not enough time in the classroom to actually “teach” vocabulary. Therefore, we need to find ways to promote autonomous vocabulary learning so that students can make good use of their time outside the classrooms. In this study, we present an online vocabulary learning system that we have developed. The results obtained from our evaluation experiment indicate that our system is more effective in retaining the meaning of the words compared to the traditional learning method. As an example of applying this system to language learning, we will give a demonstration of a Japanese onomatopoeia dictionary that we are compiling. Onomatopoeia are especially troublesome for learners of the Japanese language. Although they are frequently used in both written and spoken Japanese, they are very difficult to translate to other languages. We demonstrate that by employing our system, learners are better able to understand the meaning and the context of each lexical item.

  18. A Few Words about Words | Poster

    Science.gov (United States)

    By Ken Michaels, Guest Writer In Shakespeare’s play “Hamlet,” Polonius inquires of the prince, “What do you read, my lord?” Not at all pleased with what he’s reading, Hamlet replies, “Words, words, words.”1 I have previously described the communication model in which a sender encodes a message and then sends it via some channel (or medium) to a receiver, who decodes the message

  19. ASTER L2 Surface Radiance TIR V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The ASTER L2 Surface Radiance TIR is an on-demand product generated using the five thermal infra-red (TIR) Bands (acquired either during the day or night time)...

  20. L2-gain and passivity techniques in nonlinear control

    CERN Document Server

    van der Schaft, Arjan

    2017-01-01

    This standard text gives a unified treatment of passivity and L2-gain theory for nonlinear state space systems, preceded by a compact treatment of classical passivity and small-gain theorems for nonlinear input-output maps. The synthesis between passivity and L2-gain theory is provided by the theory of dissipative systems. Specifically, the small-gain and passivity theorems and their implications for nonlinear stability and stabilization are discussed from this standpoint. The connection between L2-gain and passivity via scattering is detailed. Feedback equivalence to a passive system and resulting stabilization strategies are discussed. The passivity concepts are enriched by a generalised Hamiltonian formalism, emphasising the close relations with physical modeling and control by interconnection, and leading to novel control methodologies going beyond passivity. The potential of L2-gain techniques in nonlinear control, including a theory of all-pass factorizations of nonlinear systems, and of parametrization...
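
    For orientation, the central notion in the title can be stated as follows (a standard textbook formulation, not quoted from this book): a system with input u, output y and initial state x_0 has L2-gain at most \gamma if there exists a constant \beta(x_0) \ge 0 such that

        \int_0^T \|y(t)\|^2 \, dt \;\le\; \gamma^2 \int_0^T \|u(t)\|^2 \, dt + \beta(x_0)
        \qquad \text{for all } T \ge 0 \text{ and all admissible inputs } u.

    Equivalently, the system is dissipative with respect to the supply rate s(u, y) = \gamma^2\|u\|^2 - \|y\|^2; passivity corresponds, in the same dissipativity framework, to the supply rate s(u, y) = u^\top y.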

  1. Okeanos Explorer (EX1503L2): Tropical Exploration (Mapping II)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — During EX-15-03L2, multibeam data will be collected 24 hours a day and XBT casts will be conducted at an interval defined by prevailing oceanographic conditions, but...

  2. MRI features in 17 patients with l2 hydroxyglutaric aciduria

    International Nuclear Information System (INIS)

    Fourati, Héla; Ellouze, Emna; Ahmadi, Mourad; Chaari, Dhouha; Kamoun, Fatma; Hsairi, Ines; Triki, Chahnez; Mnif, Zeineb

    2016-01-01

    l-2-Hydroxyglutaric (l-2-HG) aciduria is a rare inherited metabolic disease usually observed in children. Patients present a very slowly progressive deterioration with cerebellar ataxia, mild or severe mental retardation, and various other clinical signs including extrapyramidal and pyramidal symptoms, and seizures Goffette et al. [1]. This leukencephalopathy was first described in 1980 Duran et al. [2]. Brain magnetic resonance imaging (MRI) demonstrates nonspecific subcortical white matter (WM) loss, cerebellar atrophy and changes in dentate nuclei and putamen Steenweg et al. [3]. The diagnosis is highlighted by increased levels of l-2-HG in body fluids such as urine and cerebrospinal fluid. The purpose of this study is to retrospectively describe the brain MRI features in l-2-HG aciduria

  3. ASTER L2 Surface Radiance VNIR and SWIR V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The ASTER L2 Surface Radiance is a multi-file product that contains atmospherically corrected data for both the Visible Near-Infrared (VNIR) and Shortwave Infrared...

  4. Feedback stabilization experiments using l = 2 equilibrium windings in Scyllac

    International Nuclear Information System (INIS)

    Bartsch, R.R.; Cantrell, E.L.; Gribble, R.F.; Freese, K.B.; Handy, L.E.; Kristal, R.; Miller, G.; Quinn, W.E.

    1977-01-01

    The confinement time in the Scyllac Sector Feedback Experiment has been extended with a pre-programmed equilibrium compensation force. This force was produced by driving a current with a flexible waveform in an additional set of l = 2 windings

  5. Neural overlap of L1 and L2 semantic representations in speech: A decoding approach.

    Science.gov (United States)

    Van de Putte, Eowyn; De Baene, Wouter; Brass, Marcel; Duyck, Wouter

    2017-11-15

    Although research has now converged towards a consensus that both languages of a bilingual are represented in at least partly shared systems for language comprehension, it remains unclear whether both languages are represented in the same neural populations for production. We investigated the neural overlap between L1 and L2 semantic representations of translation equivalents using a production task in which the participants had to name pictures in L1 and L2. Using a decoding approach, we tested whether brain activity during the production of individual nouns in one language allowed predicting the production of the same concepts in the other language. Because both languages only share the underlying semantic representation (sensory and lexical overlap was maximally avoided), this would offer very strong evidence for neural overlap in semantic representations of bilinguals. Based on the brain activation for the individual concepts in one language in the bilateral occipito-temporal cortex and the inferior and the middle temporal gyrus, we could accurately predict the equivalent individual concepts in the other language. This indicates that these regions share semantic representations across L1 and L2 word production. Copyright © 2017 Elsevier Inc. All rights reserved.
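
    A minimal sketch of the kind of cross-language decoding analysis described above: a classifier is trained to identify the named concept from L1 trial patterns and then tested on L2 trials. The scikit-learn pipeline, the random placeholder data, and all variable names are illustrative assumptions, not the authors' pipeline.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        # X_l1, X_l2: trials x voxels activation patterns from an ROI (placeholder random data);
        # y_l1, y_l2: concept labels for the pictures named on each trial.
        rng = np.random.default_rng(0)
        X_l1, X_l2 = rng.normal(size=(80, 500)), rng.normal(size=(80, 500))
        y_l1 = y_l2 = np.repeat(np.arange(8), 10)           # 8 concepts, 10 trials each

        clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
        clf.fit(X_l1, y_l1)                                  # learn concepts from L1 naming trials
        acc = clf.score(X_l2, y_l2)                          # decode the same concepts from L2 trials
        print(f"cross-language decoding accuracy: {acc:.2f}  (chance = {1/8:.2f})")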

  6. $L^2$ estimates for the $\bar\partial$ operator

    OpenAIRE

    McNeal, Jeffery D.; Varolin, Dror

    2015-01-01

    This is a survey article about $L^2$ estimates for the $\bar\partial$ operator. After a review of the basic approach that has come to be called the "Bochner-Kodaira Technique", the focus is on twisted techniques and their applications to estimates for $\bar\partial$, to $L^2$ extension theorems, and to other problems in complex analysis and geometry, including invariant metric estimates and the $\bar\partial$-Neumann Problem.

  7. On Wilson bases in $L^2(\mathbb{R}^d)$

    DEFF Research Database (Denmark)

    Bownik, Marcin; Sielemann Jakobsen, Mads; Lemvig, Jakob

    2017-01-01

    A Wilson system is a collection of finite linear combinations of time frequency shifts of a square integrable function. It is well known that, starting from a tight Gabor frame for $L^{2}(\mathbb{R})$ with redundancy 2, one can construct an orthonormal Wilson basis for $L^2(\mathbb{R})$ whose...... of redundancy $2^k$, where $k=1, 2, \hdots, d$. These results generalize most of the known results about the existence of orthonormal Wilson bases.

  8. Applications of Text Analysis Tools for Spoken Response Grading

    Science.gov (United States)

    Crossley, Scott; McNamara, Danielle

    2013-01-01

    This study explores the potential for automated indices related to speech delivery, language use, and topic development to model human judgments of TOEFL speaking proficiency in second language (L2) speech samples. For this study, 244 transcribed TOEFL speech samples taken from 244 L2 learners were analyzed using automated indices taken from…
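
    The modeling step such studies rely on, regressing human holistic scores on automated indices, can be sketched as follows; the three indices and the data below are invented placeholders rather than the study's actual features or results.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented automated indices for 244 speech samples: speech rate, lexical
# diversity, and a topic-development measure (placeholders, not the study's).
n = 244
X = np.column_stack([
    rng.normal(3.0, 0.6, n),    # words per second
    rng.uniform(0.4, 0.9, n),   # type-token ratio
    rng.normal(0.5, 0.2, n),    # topic-development index
])
human_scores = 1.0 + 0.6 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.3, n)

# Ordinary least squares fit of human judgments from the automated indices.
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, human_scores, rcond=None)
predicted = design @ coef
r = np.corrcoef(predicted, human_scores)[0, 1]
print(f"coefficients: {np.round(coef, 2)}, r = {r:.2f}")
```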

  9. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to the difference in the existence of consonant sounds and rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  11. The Plausibility of Tonal Evolution in the Malay Dialect Spoken in Thailand: Evidence from an Acoustic Study

    Directory of Open Access Journals (Sweden)

    Phanintra Teeranon

    2007-12-01

    Full Text Available The F0 values of vowels following voiceless consonants are higher than those of vowels following voiced consonants; high vowels have a higher F0 than low vowels. It has also been found that when high vowels follow voiced consonants, the F0 values decrease. In contrast, low vowels following voiceless consonants show increasing F0 values. In other words, the voicing of initial consonants has been found to counterbalance the intrinsic F0 values of high and low vowels (House and Fairbanks 1953, Lehiste and Peterson 1961, Lehiste 1970, Laver 1994, Teeranon 2006). To test whether these three findings are applicable to a disyllabic language, the F0 values of high and low vowels following voiceless and voiced consonants were studied in a Malay dialect of the Austronesian language family spoken in Pathumthani Province, Thailand. The data were collected from three male informants, aged 30-35. The Praat program was used for acoustic analysis. The findings revealed the influence of the voicing of initial consonants on the F0 of vowels to be greater than the influence of vowel height. Evidence from this acoustic study shows the plausibility of the Malay dialect spoken in Pathumthani becoming a tonal language through the influence of initial consonants rather than through the high-low vowel dimension.
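
    The comparison at the heart of such an acoustic study reduces to averaging F0 within onset-voicing and vowel-height conditions and comparing the two effect sizes. A minimal sketch of that arithmetic follows; the measurement values are made up for illustration and are not the study's data.

```python
from statistics import mean

# Hypothetical F0 measurements (Hz) for vowels grouped by the voicing of the
# preceding consonant; the numbers are illustrative, not the study's data.
f0_after_voiceless = {"high_vowel": [142, 138, 145], "low_vowel": [128, 131, 126]}
f0_after_voiced = {"high_vowel": [124, 121, 127], "low_vowel": [118, 115, 120]}

def condition_means(groups):
    """Mean F0 per vowel-height category."""
    return {vowel: mean(values) for vowel, values in groups.items()}

voiceless_means = condition_means(f0_after_voiceless)
voiced_means = condition_means(f0_after_voiced)

# Size of the voicing effect within each vowel height, versus the size of
# the vowel-height effect within each voicing condition.
voicing_effect = {v: voiceless_means[v] - voiced_means[v] for v in voiceless_means}
height_effect = {
    "after_voiceless": voiceless_means["high_vowel"] - voiceless_means["low_vowel"],
    "after_voiced": voiced_means["high_vowel"] - voiced_means["low_vowel"],
}
print(voicing_effect, height_effect)
```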

  12. The role of syllabic structure in French visual word recognition.

    Science.gov (United States)

    Rouibah, A; Taft, M

    2001-03-01

    Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.

  13. A model of the effect of uncertainty on the C. elegans L2/L2d decision.

    Directory of Open Access Journals (Sweden)

    Leon Avery

    Full Text Available At the end of the first larval stage, the C. elegans larva chooses between two developmental pathways: an L2 committed to reproductive development, and an L2d, which retains the option of undergoing reproductive development or entering the dauer diapause. I develop a quantitative model of this choice using mathematical tools developed for pricing financial options. The model predicts that the optimal decision must take into account not only the expected potential for reproductive growth, but also the uncertainty in that expected potential. Because the L2d has more flexibility than the L2, it is favored in unpredictable environments. I estimate that the ability to take uncertainty into account may increase reproductive value by as much as 5%, and discuss possible experimental tests for this ability.
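
    The intuition that flexibility (the L2d option) is worth more when the future is less predictable can be illustrated with a toy option-value calculation. The payoff distribution and fallback value below are invented for illustration and are not the parameters of the published model.

```python
import random

# Illustrative sketch (not the paper's model): the value of keeping a choice
# open grows with environmental uncertainty. "payoff" is the value of
# committing to reproductive growth in a future environment; the dauer
# option is taken to have a fixed fallback value.
FALLBACK = 1.0  # hypothetical value of entering dauer

def expected_value(sigma, committed, n=100_000, mu=1.2):
    """Average value over random future environments with spread sigma."""
    total = 0.0
    for _ in range(n):
        payoff = random.gauss(mu, sigma)
        if committed:
            total += payoff                      # L2: locked into growth
        else:
            total += max(payoff, FALLBACK)       # L2d: can still choose dauer
    return total / n

for sigma in (0.1, 0.5, 1.0):
    l2 = expected_value(sigma, committed=True)
    l2d = expected_value(sigma, committed=False)
    print(f"sigma={sigma}: L2={l2:.3f}  L2d={l2d:.3f}  option value={l2d - l2:.3f}")
```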

  14. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
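
    A scaled-down analogue of the reported analyses, an internal-consistency coefficient over the ten per-encounter ratings plus a correlation with an external criterion, can be sketched as follows. The data are simulated and Cronbach's alpha is used here as a simple stand-in for the fuller generalizability analysis the study actually performed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 candidates x 10 encounters. Each candidate has a true
# spoken-English level; each encounter rating adds encounter/SP-specific noise.
true_level = rng.normal(3.5, 0.5, size=(200, 1))
ratings = true_level + rng.normal(0, 0.3, size=(200, 10))
toefl = 500 + 40 * true_level.ravel() + rng.normal(0, 15, size=200)  # external criterion

def cronbach_alpha(items):
    """Internal consistency across encounters (columns = encounters)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(ratings)
validity = np.corrcoef(ratings.mean(axis=1), toefl)[0, 1]
print(f"alpha across 10 encounters = {alpha:.2f}, correlation with TOEFL = {validity:.2f}")
```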

  16. Combinatorics on words Christoffel words and repetitions in words

    CERN Document Server

    Berstel, Jean; Reutenauer, Christophe; Saliola, Franco V

    2008-01-01

    The two parts of this text are based on two series of lectures delivered by Jean Berstel and Christophe Reutenauer in March 2007 at the Centre de Recherches Mathématiques, Montréal, Canada. Part I represents the first modern and comprehensive exposition of the theory of Christoffel words. Part II presents numerous combinatorial and algorithmic aspects of repetition-free words stemming from the work of Axel Thue-a pioneer in the theory of combinatorics on words. A beginner to the theory of combinatorics on words will be motivated by the numerous examples, and the large variety of exercises, which make the book unique at this level of exposition. The clean and streamlined exposition and the extensive bibliography will also be appreciated. After reading this book, beginners should be ready to read modern research papers in this rapidly growing field and contribute their own research to its development. Experienced readers will be interested in the finitary approach to Sturmian words that Christoffel words offe...

  17. On universal partial words

    OpenAIRE

    Chen, Herman Z. Q.; Kitaev, Sergey; Mütze, Torsten; Sun, Brian Y.

    2016-01-01

    A universal word for a finite alphabet $A$ and some integer $n\geq 1$ is a word over $A$ such that every word in $A^n$ appears exactly once as a subword (cyclically or linearly). It is well-known and easy to prove that universal words exist for any $A$ and $n$. In this work we initiate the systematic study of universal partial words. These are words that in addition to the letters from $A$ may contain an arbitrary number of occurrences of a special `joker' symbol $\Diamond$ …
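
    In the classical (non-partial) cyclic case, a universal word is exactly a de Bruijn sequence. The sketch below is the textbook recursive construction, included only to make the definition concrete; it is not specific to this paper.

```python
def de_bruijn(k, n):
    """Cyclic universal word (de Bruijn sequence) over the alphabet {0,...,k-1} for length-n subwords."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# de_bruijn(2, 3) -> [0, 0, 0, 1, 0, 1, 1, 1]: read cyclically, every binary
# word of length 3 appears exactly once as a subword.
print(de_bruijn(2, 3))
```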

  18. Word 2013 for dummies

    CERN Document Server

    Gookin, Dan

    2013-01-01

    This bestselling guide to Microsoft Word is the first and last word on Word 2013 It's a whole new Word, so jump right into this book and learn how to make the most of it. Bestselling For Dummies author Dan Gookin puts his usual fun and friendly candor back to work to show you how to navigate the new features of Word 2013. Completely in tune with the needs of the beginning user, Gookin explains how to use Word 2013 quickly and efficiently so that you can spend more time working on your projects and less time trying to figure it all out. Walks you through the capabilit

  19. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/ machine and human/ human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin

  20. Textual, Genre and Social Features of Spoken Grammar: A Corpus-Based Approach

    Directory of Open Access Journals (Sweden)

    Carmen Pérez-Llantada

    2009-02-01

    Full Text Available This paper describes a corpus-based approach to teaching and learning spoken grammar for English for Academic Purposes with reference to Bhatia’s (2002) multi-perspective model for discourse analysis: a textual perspective, a genre perspective and a social perspective. From a textual perspective, corpus-informed instruction helps students identify grammar items through statistical frequencies, collocational patterns, context-sensitive meanings and discoursal uses of words. From a genre perspective, corpus observation provides students with exposure to recurrent lexico-grammatical patterns across different academic text types (genres). From a social perspective, corpus models can be used to raise learners’ awareness of how speakers’ different discourse roles, discourse privileges and power statuses are enacted in their grammar choices. The paper describes corpus-based instructional procedures, gives samples of learners’ linguistic output, and provides comments on the students’ response to this method of instruction. Data resulting from the assessment process and student production suggest that corpus-informed instruction grounded in Bhatia’s multi-perspective model can constitute a pedagogical approach in order to (i) obtain positive student responses from input and authentic samples of grammar use, (ii) help students identify and understand the textual, genre and social aspects of grammar in real contexts of use, and therefore (iii) help develop students’ ability to use grammar accurately and appropriately.
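
    As a concrete, toy-sized illustration of the frequency and collocation statistics such corpus-informed instruction relies on, the sketch below counts right-hand bigram collocates of a target item in a miniature corpus; the corpus text and target word are placeholders, not material from the study.

```python
from collections import Counter
import re

# Placeholder corpus; in practice this would be an academic spoken corpus.
corpus = """the results suggest that the proposed model works well
we suggest that further work is needed
these findings suggest a different interpretation"""

tokens = re.findall(r"[a-z']+", corpus.lower())
bigrams = Counter(zip(tokens, tokens[1:]))

target = "suggest"  # hypothetical item under study
right_collocates = Counter(
    {pair: n for pair, n in bigrams.items() if pair[0] == target}
)
print(right_collocates.most_common(3))
# e.g. [(('suggest', 'that'), 2), (('suggest', 'a'), 1)]
```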

  1. Listening in circles. Spoken drama and the architects of sound, 1750-1830.

    Science.gov (United States)

    Tkaczyk, Viktoria

    2014-07-01

    The establishment of the discipline of architectural acoustics is generally attributed to the physicist Wallace Clement Sabine, who developed the formula for reverberation time around 1900, and with it the possibility of making calculated prognoses about the acoustic potential of a particular design. If, however, we shift the perspective from the history of this discipline to the history of architectural knowledge and praxis, it becomes apparent that the topos of 'good sound' had already entered the discourse much earlier. This paper traces the Europe-wide discussion on theatre architecture between 1750 and 1830. It will be shown that the period of investigation is marked by an increasing interest in auditorium acoustics, one linked to the emergence of a bourgeois theatre culture and the growing socio-political importance of the spoken word. In the wake of this development the search among architects for new methods of acoustic research started to differ fundamentally from an analogical reasoning on the nature of sound propagation and reflection, which in part dated back to antiquity. Through their attempts to find new ways of visualising the behaviour of sound in enclosed spaces and to rethink both the materiality and the mediality of theatre auditoria, architects helped pave the way for the establishment of architectural acoustics as an academic discipline around 1900.

  2. For US Students, L2 Reading Comprehension Is Hard Because L2 Listening Comprehension Is Hard, Too

    Science.gov (United States)

    Sparks, Richard; Patton, Jon; Luebbers, Julie

    2018-01-01

    The Simple View of Reading (SVR) model posits that reading is the product of word decoding and language comprehension and that oral language (listening) comprehension is the best predictor of reading comprehension once word-decoding skill has been established. The SVR model also proposes that there are good readers and three types of poor…

  3. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language

    OpenAIRE

    Claudia Repetto; Elisa Pedroli; Manuela Macedonia

    2017-01-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word’s meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of t...

  4. Support for an auto-associative model of spoken cued recall: evidence from fMRI.

    Science.gov (United States)

    de Zubicaray, Greig; McMahon, Katie; Eastburn, Mathew; Pringle, Alan J; Lorenz, Lina; Humphreys, Michael S

    2007-03-02

    Cued recall and item recognition are considered the standard episodic memory retrieval tasks. However, only the neural correlates of the latter have been studied in detail with fMRI. Using an event-related fMRI experimental design that permits spoken responses, we tested hypotheses from an auto-associative model of cued recall and item recognition [Chappell, M., & Humphreys, M. S. (1994). An auto-associative neural network for sparse representations: Analysis and application to models of recognition and cued recall. Psychological Review, 101, 103-128]. In brief, the model assumes that cues elicit a network of phonological short term memory (STM) and semantic long term memory (LTM) representations distributed throughout the neocortex as patterns of sparse activations. This information is transferred to the hippocampus which converges upon the item closest to a stored pattern and outputs a response. Word pairs were learned from a study list, with one member of the pair serving as the cue at test. Unstudied words were also intermingled at test in order to provide an analogue of yes/no recognition tasks. Compared to incorrectly rejected studied items (misses) and correctly rejected (CR) unstudied items, correctly recalled items (hits) elicited increased responses in the left hippocampus and neocortical regions including the left inferior prefrontal cortex (LIPC), left mid lateral temporal cortex and inferior parietal cortex, consistent with predictions from the model. This network was very similar to that observed in yes/no recognition studies, supporting proposals that cued recall and item recognition involve common rather than separate mechanisms.
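
    As a rough illustration of what "converging upon the item closest to a stored pattern" means computationally, the sketch below implements a generic Hopfield-style auto-associative memory. It is a deliberately simplified stand-in for illustration only, not the Chappell and Humphreys (1994) model or the imaging analysis itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic auto-associative (Hopfield-style) pattern completion: store a few
# patterns, then recover one from a degraded cue.
n_units, n_items = 64, 5
patterns = rng.choice([-1, 1], size=(n_items, n_units))   # stored word-pair patterns

# Hebbian outer-product storage (zero diagonal).
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iteratively clean up a degraded cue until it settles on a stored pattern."""
    state = cue.copy().astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Degrade one stored pattern (flip 20% of its units) and try to recover it.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=n_units // 5, replace=False)
cue[flip] *= -1
print("recovered original:", np.array_equal(recall(cue), patterns[0]))
```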

  5. Association of Affect with Vertical Position in L1 but not in L2 in Unbalanced Bilinguals

    Directory of Open Access Journals (Sweden)

    Degao Li

    2015-05-01

    Full Text Available After judging the valence of positive (e.g., happy) and negative words (e.g., sad), the participants’ response to the letter (q or p) was faster and slower, respectively, when the letter appeared at the upper end than at the lower end of the screen in Meier and Robinson’s (2004) second experiment. To compare this metaphorical association of affect with vertical position in Chinese-English bilinguals’ first language (L1) and second language (L2) (language), we conducted four experiments in an affective priming task. The targets were one set of positive or negative words (valence), which were shown vertically above or below the centre of the screen (position). The primes, presented at the centre of the screen, were affective words that were semantically related to the targets, affective words that were not semantically related to the targets, affective icon-pictures, and neutral strings in Experiments 1, 2, 3, and 4, respectively. In judging the targets’ valence, the participants showed different patterns of interactions between language, valence, and position in reaction times across the experiments. We concluded that the metaphorical association between affect and vertical position works in L1 but not in L2 for unbalanced bilinguals.

  6. Understanding Medical Words

    Science.gov (United States)

    From the Summer 2009 issue: a resource that teaches you about many of the words related to your health care.

  7. A joint model of word segmentation and meaning acquisition through cross-situational learning.

    Science.gov (United States)

    Räsänen, Okko; Rasilo, Heikki

    2015-10-01

    Human infants learn meanings for spoken words in complex interactions with other people, but the exact learning mechanisms are unknown. Among researchers, a widely studied learning mechanism is called cross-situational learning (XSL). In XSL, word meanings are learned when learners accumulate statistical information between spoken words and co-occurring objects or events, allowing the learner to overcome referential uncertainty after having sufficient experience with individually ambiguous scenarios. Existing models in this area have mainly assumed that the learner is capable of segmenting words from speech before grounding them to their referential meaning, while segmentation itself has been treated relatively independently of the meaning acquisition. In this article, we argue that XSL is not just a mechanism for word-to-meaning mapping, but that it provides strong cues for proto-lexical word segmentation. If a learner directly solves the correspondence problem between continuous speech input and the contextual referents being talked about, segmentation of the input into word-like units emerges as a by-product of the learning. We present a theoretical model for joint acquisition of proto-lexical segments and their meanings without assuming a priori knowledge of the language. We also investigate the behavior of the model using a computational implementation, making use of transition probability-based statistical learning. Results from simulations show that the model is not only capable of replicating behavioral data on word learning in artificial languages, but also shows effective learning of word segments and their meanings from continuous speech. Moreover, when augmented with a simple familiarity preference during learning, the model shows a good fit to human behavioral data in XSL tasks. These results support the idea of simultaneous segmentation and meaning acquisition and show that comprehensive models of early word segmentation should take referential word
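
    The statistical core of XSL, accumulating word-referent co-occurrence counts across individually ambiguous situations, can be sketched in a few lines. The miniature vocabulary and scenes below are invented for illustration and omit the segmentation component that the model above adds.

```python
from collections import defaultdict

# Each learning situation pairs the words heard with the set of objects present;
# within a single situation the mapping is ambiguous.
situations = [
    ({"dog", "ball"}, {"DOG", "BALL"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
]

cooc = defaultdict(lambda: defaultdict(int))
for words, referents in situations:
    for w in words:
        for r in referents:
            cooc[w][r] += 1

# Across situations the spurious pairings wash out: each word ends up
# co-occurring most often with its true referent.
for w, counts in cooc.items():
    best = max(counts, key=counts.get)
    print(w, "->", best, dict(counts))
```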

  8. A gravity model for crustal dynamics (GEM-L2)

    Science.gov (United States)

    Lerch, F. J.; Klosko, S. M.; Patel, G. B.; Wagner, C. A.

    1985-01-01

    The Laser Geodynamics Satellite (Lageos) was the first NASA satellite which was placed into orbit exclusively for laser ranging applications. Lageos was designed to permit extremely accurate measurements of the earth's rotation and the movement of the tectonic plates. The Goddard earth model, GEM-L2, was derived mainly on the basis of the precise laser ranging data taken on many satellites. Douglas et al. (1984) have demonstrated the utility of GEM-L2 in detecting the broadest ocean circulations. As Lageos data constitute the most extensive set of satellite laser observations ever collected, the incorporation of 2-1/2 years of these data into the Goddard earth models (GEM) has substantially advanced the geodynamical objectives. The present paper discusses the products of the GEM-L2 solution.

  9. Review of Social Interaction and L2 Classroom Discourse

    DEFF Research Database (Denmark)

    aus der Wieschen, Maria Vanessa

    2015-01-01

    Social Interaction and L2 Classroom Discourse investigates interactional practices in L2 classrooms. Using Conversation Analysis, the book unveils the processes underlying the co-construction of mutual understanding in potential interactional troubles in L2 classrooms – such as claims … taster sessions over foreign language classrooms in monolingual contexts to English as an Additional Language settings in a multilingual context. This variety of settings allows him to examine a range of verbal and non-verbal features of classroom interaction, for example how code-switching is used …-6), and application (Chapters 7 and 8). A central focus throughout the entire book is classroom interactional competence and its influence on language learning.

  10. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language.

    Science.gov (United States)

    Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela

    2017-01-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word's meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training.

  11. Fast mapping of novel word forms traced neurophysiologically

    Directory of Open Access Journals (Sweden)

    Yury Shtyrov

    2011-11-01

    Full Text Available Human capacity to quickly learn new words, critical for our ability to communicate using language, is well-known from behavioural studies and observations, but its neural underpinnings remain unclear. In this study, we have used event-related potentials to record brain activity to novel spoken word forms as they are being learnt by the human nervous system through passive auditory exposure. We found that the brain response dynamics change dramatically within the short (20 min) exposure session: as the subjects become familiarised with the novel word forms, the early (~100 ms) fronto-central activity they elicit increases in magnitude and becomes similar to that of known real words. At the same time, acoustically similar real words used as control stimuli show a relatively stable response throughout the recording session; these differences between the stimulus groups are confirmed using both factorial and linear regression analyses. Furthermore, acoustically matched novel non-speech stimuli do not demonstrate a similar response increase, suggesting neural specificity of this rapid learning phenomenon to linguistic stimuli. Left-lateralised perisylvian cortical networks appear to underlie such fast mapping of novel word forms onto the brain’s mental lexicon.

  12. Studies of the magnetic configuration of an l=2 stellarator

    International Nuclear Information System (INIS)

    Fedyanin, O.I.

    1975-05-01

    The first part of this report describes a computational study of the effect of first and second order resonances on an l = 2 stellarator, taking as model the PROTO-CLEO experiment. The magnetic surfaces are computed in each case and the break up shown. The second part of the report deals with measurements made with an electron beam on the PROTO-CLEO l = 2 stellarator. The magnetic surfaces are measured by means of a movable probe which intercepts the beams. It is shown that the form of the surfaces, particularly near the separatrix, is sensitive to quite small perturbations of a resonant type. (author)

  13. L(2,1)-labelling of Circular-arc Graph

    OpenAIRE

    Paul, Satyabrata; Pal, Madhumangal; Pal, Anita

    2014-01-01

    An L(2,1)-labelling of a graph $G=(V, E)$ is a function $f$ from the vertex set $V(G)$ to the set of non-negative integers such that adjacent vertices get numbers at least two apart, and vertices at distance two get distinct numbers. The L(2,1)-labelling number of $G$, denoted by $\lambda_{2,1}(G)$, is the minimum range of labels over all such labellings. In this article, it is shown that, for a circular-arc graph $G$, the upper bound of $\lambda_{2,1}(G)$ is $\Delta+3\omega$, …
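
    The two constraints in the definition are easy to state operationally; the sketch below checks them for a small example graph and labelling, both invented for illustration and unrelated to the circular-arc bound in the paper.

```python
from itertools import combinations

# Checker for the L(2,1) condition on an undirected graph given as an
# adjacency dict; the example graph and labelling are made up for illustration.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

def is_l21_labelling(graph, label):
    for u, v in combinations(graph, 2):
        if v in graph[u]:                      # adjacent: labels differ by >= 2
            if abs(label[u] - label[v]) < 2:
                return False
        elif graph[u] & graph[v]:              # distance two: labels distinct
            if label[u] == label[v]:
                return False
    return True

labelling = {0: 0, 1: 2, 2: 4, 3: 1}
print(is_l21_labelling(graph, labelling))   # True; the span (max label used) is 4
```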

  14. A L2HGDH initiator methionine codon mutation in a Yorkshire terrier with L-2-hydroxyglutaric aciduria

    Directory of Open Access Journals (Sweden)

    Farias Fabiana HG

    2012-07-01

    Full Text Available Abstract Background L-2-hydroxyglutaric aciduria is a metabolic repair deficiency characterized by elevated levels of L-2-hydroxyglutaric acid in urine, blood and cerebrospinal fluid. Neurological signs associated with the disease in humans and dogs include seizures, ataxia and dementia. Case presentation Here we describe an 8 month old Yorkshire terrier that presented with episodes of hyperactivity and aggressive behavior. Between episodes, the dog’s behavior and neurologic examinations were normal. A T2 weighted MRI of the brain showed diffuse grey matter hyperintensity and a urine metabolite screen showed elevated 2-hydroxyglutaric acid. We sequenced all 10 exons and intron-exon borders of L2HGDH from the affected dog and identified a homozygous A to G transition in the initiator methionine codon. The first inframe methionine is at p.M183 which is past the mitochondrial targeting domain of the protein. Initiation of translation at p.M183 would encode an N-terminal truncated protein unlikely to be functional. Conclusions We have identified a mutation in the initiation codon of L2HGDH that is likely to result in a non-functional gene. The Yorkshire terrier could serve as an animal model to understand the pathogenesis of L-2-hydroxyglutaric aciduria and to evaluate potential therapies.

  15. Finding the key to successful L2 learning in groups and individuals

    Directory of Open Access Journals (Sweden)

    Wander Lowie

    2017-03-01

    Full Text Available A large body of studies into individual differences in second language learning has shown that success in second language learning is strongly affected by a set of relevant learner characteristics ranging from the age of onset to motivation, aptitude, and personality. Most studies have concentrated on a limited number of learner characteristics and have argued for the relative importance of some of these factors. Clearly, some learners are more successful than others, and it is tempting to try to find the factor or combination of factors that can crack the code to success. However, isolating one or several global individual characteristics can only give a partial explanation of success in second language learning. The limitation of this approach is that it only reflects rather general personality characteristics of learners at one point in time, while both language development and the factors affecting it are instances of complex dynamic processes that develop over time. Factors that have been labelled as “individual differences”, as well as the development of proficiency, are characterized by nonlinear relationships in the time domain, due to which the rate of success cannot be simply deduced from a combination of factors. Moreover, in the complex dynamic systems theory (CDST) literature it has been argued that a generalization about the interaction of variables across individuals is not warranted when we acknowledge that language development is essentially an individual process (Molenaar, 2015). In this paper, the viability of these generalizations is investigated by exploring the L2 development over time of two identical twins in Taiwan who can be expected to be highly similar in all respects, from their environment to their level of English proficiency, to their exposure to English, and to their individual differences. In spite of the striking similarities between these learners, the development of their L2 English over time was very different.

  16. Could a multimodal dictionary serve as a learning tool? An examination of the impact of technologically enhanced visual glosses on L2 text comprehension

    Directory of Open Access Journals (Sweden)

    Takeshi Sato

    2016-09-01

    Full Text Available This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it facilitates incidental word retention. This study explores other potentials of multimodal L2 vocabulary learning: explicit learning with a multimodal dictionary could enhance not only word retention, but also text comprehension; the dictionary could serve not only as a reference tool, but also as a learning tool; and technology-enhanced visual glosses could facilitate deeper text comprehension. To verify these claims, this study investigates the multimodal representations’ effects on Japanese students learning L2 locative prepositions by developing two online dictionaries, one with static pictures and one with animations. The findings show the advantage of such dictionaries in explicit learning; however, no significant differences are found between the two types of visual glosses, either in the vocabulary or in the listening tests. This study confirms the effectiveness of multimodal L2 materials, but also emphasizes the need for further research into making the technologically enhanced materials more effective.

  17. Auditory comprehension: from the voice up to the single word level

    OpenAIRE

    Jones, Anna Barbara

    2016-01-01

    Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...

  18. WordPress Bible

    CERN Document Server

    Brazell, Aaron

    2010-01-01

    The WordPress Bible provides a complete and thorough guide to the largest self-hosted blogging tool. This guide starts by covering the basics of WordPress, such as installation and the principles of blogging, marketing, and social media interaction, but then quickly ramps the reader up to more intermediate and advanced topics such as plugins, the WordPress Loop, themes and templates, custom fields, caching, security, and more. The WordPress Bible is the only complete resource one needs to learn WordPress from beginning to end.

  19. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  20. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Full Text Available Presented at the 5th Workshop on Spoken Language Technology for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia; published in Procedia Computer Science 81 (2016), 128–135.