WorldWideScience

Sample records for monosyllabic spoken word

  1. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    Science.gov (United States)

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

    Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to tonal languages such as Mandarin Chinese. A key reason is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
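
    Purely as an illustration of the core idea described above (word units gathering bottom-up support from phoneme and tone inputs that become available on a similar schedule), a toy interactive-activation sketch might look like the following. The lexicon, parameters, and timing are hypothetical stand-ins, not the authors' jTRACE settings.

```python
# Toy interactive-activation sketch (not the authors' jTRACE code):
# word units accumulate support from onset, vowel, and tone inputs,
# with vowel and tone information arriving together, mimicking the
# similar timing of access to vowel and tone reported for Mandarin.
import numpy as np

lexicon = {                 # hypothetical monosyllables: (onset, vowel, tone)
    "ma1": ("m", "a", 1),
    "ma2": ("m", "a", 2),
    "mo1": ("m", "o", 1),
}

def simulate(target, steps=40, decay=0.1, inhibition=0.05):
    words = list(lexicon)
    act = np.zeros(len(words))
    for t in range(steps):
        # input features unfold over time: onset first, then vowel + tone
        available = {"onset"} if t < 10 else {"onset", "vowel", "tone"}
        for i, w in enumerate(words):
            onset, vowel, tone = lexicon[w]
            bottom_up = 0.0
            if "onset" in available and onset == lexicon[target][0]:
                bottom_up += 0.1
            if "vowel" in available and vowel == lexicon[target][1]:
                bottom_up += 0.1
            if "tone" in available and tone == lexicon[target][2]:
                bottom_up += 0.1
            # leaky accumulation with lateral inhibition between words
            act[i] += bottom_up - decay * act[i] - inhibition * (act.sum() - act[i])
        act = np.clip(act, 0.0, None)
    return dict(zip(words, act.round(3)))

print(simulate("ma1"))   # "ma2" stays competitive until tone arrives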

  2. [Use of Freiburg monosyllabic test words in the contemporary German language: Currentness of the test words].

    Science.gov (United States)

    Steffens, T

    2016-08-01

    The Freiburg monosyllabic test has a word inventory based on word frequencies in written sources from the 19th century, and these frequencies are not evenly distributed across the test lists. The median word-frequency rankings in contemporary language of nine test lists deviate significantly from the overall median of all 400 monosyllables: lists 1, 6, 9, 10, and 17 include significantly more very rarely used words; lists 2, 3, 5, and 15 include significantly more very frequently used words. Compared with word frequencies in contemporary spoken German, about 45% of the test words are practically no longer used. Due to this high proportion of extremely rare or obsolete words, the word inventory is no longer representative of contemporary German, neither for the written nor for the spoken language. Highly educated persons with a large vocabulary are thereby favored. The reference values for normal-hearing persons should therefore be reevaluated.
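
    The kind of list-by-list comparison reported here is straightforward to reproduce; the sketch below contrasts each test list's contemporary-frequency ranks with those of the remaining lists. The ranks are randomly generated stand-ins, not the study's data (20 lists of 20 words, matching the 400 monosyllables).

```python
# Sketch of the reported comparison with hypothetical frequency ranks:
# compare each 20-word list's rank distribution against the other lists.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
lists = {i: rng.integers(1, 100_000, size=20) for i in range(1, 21)}  # fake ranks
all_ranks = np.concatenate(list(lists.values()))
overall_median = np.median(all_ranks)

for i, ranks in lists.items():
    rest = np.concatenate([r for j, r in lists.items() if j != i])
    # two-sided test of whether list i's ranks differ from the rest
    stat, p = mannwhitneyu(ranks, rest, alternative="two-sided")
    flag = " *" if p < 0.05 else ""
    print(f"list {i:2d}: median rank {np.median(ranks):>8.0f} "
          f"(overall {overall_median:.0f}), p={p:.3f}{flag}")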

  3. Accessing the Spoken Word

    NARCIS (Netherlands)

    Goldman, J.; Renals, S.; Bird, S.; de Jong, Franciska M.G.; Federico, M.; Fleischhauer, C.; Kornbluh, M.; Lamel, L.; Oard, D.W.; Stewart, C.; Wright, R.

    2005-01-01

    Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs.

  4. [Effects of noise competition on monosyllabic and disyllabic word perception in children].

    Science.gov (United States)

    Liu, H H; Liu, S; Li, Y; Zheng, Z P; Jin, X; Li, J; Ren, C C; Zheng, J; Zhang, J; Chen, M; Hao, J S; Yang, Y; Liu, W; Ni, X

    2017-05-07

    Objective: The purpose of the present study was to investigate the effects of noise competition on word perception in normal-hearing (NH) children and children with cochlear implantation (CI). Methods: To estimate the contribution of noise competition to speech perception, word perception in speech-shaped noise (SSN) and 4-talker babble noise (BN) was tested with the Mandarin Lexical Neighborhood Test in 80 NH children and 89 children with CI. Corrected perception percentages were acquired in each group. Results: Both signal-to-noise ratio (SNR) and noise type influenced word perception. In the NH group, corrected percentages of disyllabic word perception in SSN were 24.2%, 55.9%, 77.1%, 85.1% and 88.9% at -8, -4, 0, 4 and 8 dB SNR; the corresponding corrected percentages for monosyllabic words were 13.9%, 39.5%, 60.1%, 68.8% and 80.1%, respectively. In BN, corrected percentages for disyllabic words were 2.4%, 24.3%, 55.6%, 74.3% and 86.2%; the corresponding monosyllabic percentages were 2.3%, 20.8%, 47.2%, 61.1% and 74.8%, respectively. In the CI group, corrected percentages for disyllabic words in SSN and BN at 10 dB SNR were 65.5% and 58.1%, respectively; the corresponding monosyllabic percentages were 49.0% and 41.0%. At 5 dB SNR, corrected percentages for disyllabic words in SSN and BN were 50.0% and 38.1%, and the corresponding monosyllabic percentages were 40.8% and 25.1%, respectively. Analysis indicated that the masking effect was significantly greater in BN than in SSN. Conclusions: Noise competition significantly influences word perception performance. Specifically, the influence of noise on word perception is greater in children with CI than in NH children, and the masking effect is greater in BN than in SSN.
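
    Re-tabulating the NH percentages quoted above makes the extra masking cost of babble noise visible at each SNR; the values below are copied directly from the abstract.

```python
# NH percentages from the abstract, re-tabulated to show the extra
# masking cost of babble noise (BN) relative to speech-shaped noise (SSN).
snrs = [-8, -4, 0, 4, 8]
nh = {
    ("disyllabic", "SSN"):   [24.2, 55.9, 77.1, 85.1, 88.9],
    ("monosyllabic", "SSN"): [13.9, 39.5, 60.1, 68.8, 80.1],
    ("disyllabic", "BN"):    [2.4, 24.3, 55.6, 74.3, 86.2],
    ("monosyllabic", "BN"):  [2.3, 20.8, 47.2, 61.1, 74.8],
}

for word_type in ("disyllabic", "monosyllabic"):
    ssn = nh[(word_type, "SSN")]
    bn = nh[(word_type, "BN")]
    cost = [round(s - b, 1) for s, b in zip(ssn, bn)]  # SSN minus BN score
    print(word_type, "BN masking cost per SNR:", dict(zip(snrs, cost)))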

  5. Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words

    Science.gov (United States)

    Baayen, R. H.; Feldman, L. B.; Schreuder, R.

    2006-01-01

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…

  6. Learning and Consolidation of Novel Spoken Words

    Science.gov (United States)

    Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth

    2009-01-01

    Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…

  7. Discreteness and interactivity in spoken word production.

    Science.gov (United States)

    Rapp, B; Goldrick, M

    2000-07-01

    Five theories of spoken word production that differ along the discreteness-interactivity dimension are evaluated. Specifically examined is the role that cascading activation, feedback, seriality, and interaction domains play in accounting for a set of fundamental observations derived from patterns of speech errors produced by normal and brain-damaged individuals. After reviewing the evidence from normal speech errors, case studies of 3 brain-damaged individuals with acquired naming deficits are presented. The patterns these individuals exhibit provide important constraints on theories of spoken naming. With the help of computer simulations of the 5 theories, the authors evaluate the extent to which the error patterns predicted by each theory conform with the empirical facts. The results support a theory of spoken word production that, although interactive, places important restrictions on the extent and locus of interactivity.

  8. Towards Affordable Disclosure of Spoken Word Archives

    NARCIS (Netherlands)

    Ordelman, R.J.F.; Heeren, W.F.L.; Huijbregts, M.A.H.; Hiemstra, D.; Jong, de F.M.G.; Larson, M.; Fernie, K.; Oomen, J.; Cigarran, J.

    2008-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken word archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, the least we want to be a

  9. Spoken Word Recognition Strategy for Tamil Language

    Directory of Open Access Journals (Sweden)

    An. Sigappi

    2012-01-01

    This paper outlines a strategy for recognizing a preferred vocabulary of words spoken in the Tamil language. The basic philosophy is to extract mel frequency cepstral coefficient (MFCC) features from the spoken words, which serve as representative features of the speech, and to use them to create models that aid in recognition. The models chosen for the task are hidden Markov models (HMMs) and autoassociative neural networks (AANNs). The HMM is used to model the temporal nature of speech and the AANNs to capture the distribution of feature vectors in the feature space. The created models provide a way to investigate an unexplored speech recognition arena for the Tamil language. The performance of the strategy is evaluated for a number of test utterances through HMM and AANN, and the results demonstrate the reliability of HMM for emerging applications in regional languages.
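
    A minimal sketch of such an MFCC-plus-HMM pipeline is shown below, using librosa and hmmlearn; the library choices, parameters, and the `training_files` mapping are our assumptions, not details from the paper, and the AANN branch is omitted.

```python
# Sketch of a per-word HMM recognizer over MFCC features.
import librosa
import numpy as np
from hmmlearn import hmm

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)
    # return frames x coefficients, as expected by hmmlearn
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_models(training_files, n_states=5):
    """training_files: dict mapping each vocabulary word to .wav paths."""
    models = {}
    for word, paths in training_files.items():
        feats = [mfcc_features(p) for p in paths]
        X = np.vstack(feats)
        lengths = [len(f) for f in feats]
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=20)
        m.fit(X, lengths)          # one HMM per vocabulary word
        models[word] = m
    return models

def recognize(models, path):
    X = mfcc_features(path)
    # pick the word whose HMM assigns the utterance the highest likelihood
    return max(models, key=lambda w: models[w].score(X))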

  10. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role of positional probability of syllables played in recognition of spoken word in continuous Cantonese speech. Because some sounds occur more frequently at the beginning position or ending position of Cantonese syllables than the others, so these kinds of probabilistic information of syllables may cue the locations…

  12. Recording voiceover: the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f

  13. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  14. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    Science.gov (United States)

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  15. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  16. EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations

    Directory of Open Access Journals (Sweden)

    João Mendonça Correia

    2015-02-01

    Spoken word recognition and production require fast transformations between acoustic, phonological and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., ‘paard’-‘horse’). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in
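
    The across-language generalization logic (train a classifier on patterns from one language, test on the other) can be sketched as follows, with synthetic data standing in for the EEG patterns; the feature construction and classifier choice here are illustrative assumptions, not the study's pipeline.

```python
# Sketch of across-language generalization: fit on Dutch epochs,
# test on English epochs of the same four animal concepts.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_trials, n_features, n_words = 200, 64, 4      # 4 animal concepts

concept_templates = rng.normal(size=(n_words, n_features))

def make_epochs(language_shift):
    """Synthetic 'EEG' patterns: concept template + language-specific shift + noise."""
    labels = rng.integers(0, n_words, size=n_trials)
    X = concept_templates[labels] + language_shift + rng.normal(
        scale=2.0, size=(n_trials, n_features))
    return X, labels

X_dutch, y_dutch = make_epochs(language_shift=0.0)
X_english, y_english = make_epochs(language_shift=0.5)  # acoustically different

clf = LinearSVC().fit(X_dutch, y_dutch)
acc = accuracy_score(y_english, clf.predict(X_english))
print(f"across-language generalization accuracy: {acc:.2f} (chance = 0.25)")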

  17. Phonological neighbourhood effects in French spoken-word recognition.

    Science.gov (United States)

    Dufour, Sophie; Frauenfelder, Ulrich H

    2010-02-01

    According to activation-based models of spoken-word recognition, words with many and high-frequency phonological neighbours are processed more slowly than words with few and low-frequency phonological neighbours. Although considerable empirical support for inhibitory neighbourhood density effects has accumulated, especially in English, little or nothing is known about the effects of neighbourhood frequency and its interaction with neighbourhood density. In this study we examine both effects first separately and then simultaneously in French lexical decision experiments. As in English, we found that words in dense neighbourhoods are recognized more slowly than words in sparse neighbourhoods. Moreover, we showed that words with higher frequency neighbours are processed more slowly than words with no higher frequency neighbours, but only for words occurring in sparse neighbourhoods. Implications of these results for spoken-word recognition models are discussed.
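
    The two lexical variables manipulated here are easy to compute for a phonemically transcribed lexicon. The sketch below counts neighbours (words one phoneme substitution, addition, or deletion away) and checks for higher-frequency neighbours; the toy lexicon and frequencies are hypothetical.

```python
# Sketch: neighbourhood density and neighbourhood frequency for a toy lexicon.
def one_phoneme_apart(a, b):
    """True if b differs from a by one substitution, addition, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        short, long_ = (a, b) if len(a) < len(b) else (b, a)
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
    return False

lexicon = {   # word -> (phonemes, frequency per million); hypothetical values
    "bal":  (("b", "a", "l"), 52.0),
    "mal":  (("m", "a", "l"), 11.0),
    "bol":  (("b", "o", "l"), 3.0),
    "bale": (("b", "a", "l", "e"), 1.5),
}

for word, (phons, freq) in lexicon.items():
    neighbours = [w for w, (p, _) in lexicon.items() if one_phoneme_apart(phons, p)]
    has_higher = any(lexicon[n][1] > freq for n in neighbours)
    print(f"{word}: density={len(neighbours)}, higher-freq neighbour={has_higher}")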

  18. Specific relations between alphanumeric-naming speed and reading speeds of monosyllabic and multisyllabic words

    NARCIS (Netherlands)

    Van den Bos, K.P.; Zijlstra, B.J.H.; van den Broeck, W.

    The goals of this study are to investigate, at three elementary school grade levels, how word reading speed is related to rapidly naming series of numbers, letters, colors, and pictures, and to general processing speed (measured by nonnaming or visual matching tasks), and also to determine how these

  19. The interaction of meaning and sound in spoken word recognition.

    Science.gov (United States)

    Tyler, L K; Voice, J K; Moss, H E

    2000-06-01

    Models of spoken word recognition vary in the ways in which they capture the relationship between speech input and meaning. Modular accounts prohibit a word's meaning from affecting the computation of its form-based representation, whereas interactive models allow activation at the semantic level to affect phonological processing. We tested these competing hypotheses by manipulating word familiarity and imageability, using lexical decision and repetition tasks. Responses to high-imageability words were significantly faster than those to low-imageability words. Repetition latencies were also analyzed as a function of cohort variables, revealing a significant imageability effect only for words that were members of large cohorts, suggesting that when the mapping from phonology to semantics is difficult, semantic information can help the discrimination process. Thus, these data support interactive models of spoken word recognition.

  20. Heart Rate Responses to Synthesized Affective Spoken Words

    Directory of Open Access Journals (Sweden)

    Mirja Ilves

    2012-01-01

    The present study investigated the effects of brief synthesized spoken words with emotional content on ratings of emotions and heart rate responses. Twenty participants' heart rate functioning was measured while they listened to a set of emotionally negative, neutral, and positive words produced by speech synthesizers. At the end of the experiment, ratings of emotional experiences were also collected. The results showed that the ratings of the words were in accordance with their valence. Heart rate deceleration was strongest and most prolonged in response to the negative stimuli. The findings are the first to suggest that brief spoken emotionally toned words evoke a heart rate response pattern similar to that found earlier for more sustained emotional stimuli.

  1. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    Science.gov (United States)

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  2. Flow of information in the spoken word recognition system

    NARCIS (Netherlands)

    McQueen, J.M.; Cutler, A.; Norris, D.

    2003-01-01

    Spoken word recognition consists of two major component processes. At the prelexical stage, information in the speech signal is used to generate an abstract description of the utterance which can then be used to access stored lexical knowledge. The lexical stage is characterized by multiple activation…

  3. Visual Speech Primes Open-Set Recognition of Spoken Words

    Science.gov (United States)

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  4. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  5. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  6. The Activation of Embedded Words in Spoken Word Recognition.

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions.

  7. Tracking recognition of spoken words by tracking looks to printed words

    NARCIS (Netherlands)

    McQueen, J.M.; Viebahn, M.C.

    2007-01-01

    Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel) and a phonological competitor (e.g., buffer)

  8. Phonological units in spoken word production: insights from Cantonese.

    Directory of Open Access Journals (Sweden)

    Andus Wing-Kuen Wong

    Evidence from previous psycholinguistic research suggests that phonological units such as phonemes have a privileged role during phonological planning in Dutch and English (aka the segment-retrieval hypothesis). However, the syllable-retrieval hypothesis previously proposed for Mandarin assumes that only the entire syllable unit (without the tone) can be prepared in advance in speech planning. Using Cantonese Chinese as a test case, the present study was conducted to investigate whether the syllable-retrieval hypothesis can be applied to other Chinese spoken languages. In four implicit priming (form-preparation) experiments, participants were asked to learn various sets of prompt-response di-syllabic word pairs and to utter the corresponding response word upon seeing each prompt. The response words in a block were either phonologically related (homogeneous) or unrelated (heterogeneous). Participants' naming responses were significantly faster in the homogeneous than in the heterogeneous conditions when the response words shared the same word-initial syllable (without the tone) (Exps. 1 and 4) or body (Exps. 3 and 4), but not when they shared merely the same word-initial phoneme (Exp. 2). Furthermore, the priming effect observed in the syllable-related condition was significantly larger than that in the body-related condition (Exp. 4). Although the observed syllable priming effects and the null effect of word-initial phoneme are consistent with the syllable-retrieval hypothesis, the body-related (sub-syllabic) priming effects obtained in this Cantonese study are not. These results suggest that the syllable-retrieval hypothesis is not generalizable to all Chinese spoken languages and that both syllable and sub-syllabic constituents are legitimate planning units in Cantonese speech production.

  9. Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe.

    Science.gov (United States)

    Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene

    2014-01-01

    Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.

  10. Reverse sequencing syllables of spoken words activates primary visual cortex.

    Science.gov (United States)

    Ino, Tadashi; Asada, Tomohiko; Hirose, Syuichi; Ito, Jin; Fukuyama, Hidenao

    2003-10-27

    Using fMRI, we investigated the neural correlates for sequencing the individual syllables of spoken words in reverse order. The comparison of this task to a control task requiring subjects to repeat identical syllables given acoustically revealed the activation of the primary visual cortex. Because one syllable is generally expressed by one kana character (Japanese phonogram), most subjects used a strategy in which the kana character string corresponding to the word was imagined visually and then read mentally in reverse order to perform the task effectively. Such strategy was not used during a control condition. These results suggest that the primary visual cortex plays a role in the generation of an imagined string.

  11. Children reading spoken words: interactions between vocabulary and orthographic expectancy.

    Science.gov (United States)

    Wegener, Signy; Wang, Hua-Chen; de Lissa, Peter; Robidoux, Serje; Nation, Kate; Castles, Anne

    2017-07-12

    There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., 'nesh', 'coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E. © 2017 John Wiley & Sons Ltd.

  12. Using spoken words to guide open-ended category formation.

    Science.gov (United States)

    Chauhan, Aneesh; Seabra Lopes, Luís

    2011-11-01

    Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.

  13. A cascaded neuro-computational model for spoken word recognition

    Science.gov (United States)

    Hoya, Tetsuya; van Leeuwen, Cees

    2010-03-01

    In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
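
    A toy version of this cascade (RBF receptive fields labelling each input frame, then template matching over the resulting label sequence) is sketched below; the centres, templates, and matching score are illustrative stand-ins, not the paper's trained units.

```python
# Sketch of a two-layer cascade: RBF pre-lexical units label each frame,
# and lexical units match the label sequence against stored templates.
import numpy as np

rbf_centres = {"a": np.array([1.0, 0.0]),   # pre-lexical receptive fields
               "b": np.array([0.0, 1.0]),
               "c": np.array([1.0, 1.0])}

word_templates = {"abba": list("abba"), "cab": list("cab")}

def prelexical_labels(frames, width=0.5):
    labels = []
    for f in frames:
        # RBF activation: exp(-||x - centre||^2 / width); keep the winner
        acts = {k: np.exp(-np.sum((f - c) ** 2) / width)
                for k, c in rbf_centres.items()}
        labels.append(max(acts, key=acts.get))
    return labels

def lexical_match(labels):
    # crude template matching: proportion of aligned matching labels
    def score(template):
        n = min(len(labels), len(template))
        return sum(l == t for l, t in zip(labels[:n], template[:n])) / len(template)
    return max(word_templates, key=lambda w: score(word_templates[w]))

frames = np.array([[1.0, 1.1], [0.9, 0.1], [0.1, 0.9]])  # noisy "c a b"
print(lexical_match(prelexical_labels(frames)))           # -> "cab"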

  14. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition.

    Science.gov (United States)

    Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland

    2017-08-07

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was found to be remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available, word recognition accuracy dropped and word frequency information could no longer be used, suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
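
    Masking a word's onset at a target SNR, as in the design described above, can be sketched as follows; the 200 ms onset window and the noise-scaling convention are our assumptions rather than the study's exact procedure.

```python
# Sketch: add noise over the word-initial portion only, scaled to a target SNR.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mask_onset(word, noise, sr, snr_db, onset_ms=200):
    n = int(sr * onset_ms / 1000)
    segment = noise[:n].copy()
    # scale noise so that 20*log10(rms(word) / rms(noise)) == snr_db
    segment *= rms(word) / (rms(segment) * 10 ** (snr_db / 20))
    out = word.copy()
    out[:n] += segment
    return out

sr = 16000
word = np.random.default_rng(2).normal(size=sr)    # stand-in for a speech signal
noise = np.random.default_rng(3).normal(size=sr)
masked = mask_onset(word, noise, sr, snr_db=0)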

  15. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    Science.gov (United States)

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  16. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  17. A Written Word Is Worth a Thousand Spoken Words: The Influence of Spelling on Spoken-Word Production

    Science.gov (United States)

    Burki, Audrey; Spinelli, Elsa; Gaskell, M. Gareth

    2012-01-01

    The present study investigated the role of spelling in phonological variant processing. Participants learned the auditory forms of potential reduced variants of novel French words (e.g., /plur/) and their associations with pictures of novel objects over 4 days. After the fourth day of training, the spelling of each novel word was presented once.…

  18. Developmental differences in the influence of phonological similarity on spoken word processing in Mandarin Chinese.

    Science.gov (United States)

    Malins, Jeffrey G; Gao, Danqi; Tao, Ran; Booth, James R; Shu, Hua; Joanisse, Marc F; Liu, Li; Desroches, Amy S

    2014-11-01

    The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N=17; mean age 10;5) and adults (N=17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable; namely, word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition.

  19. Phonological Competition within the Word: Evidence from the Phoneme Similarity Effect in Spoken Production

    Science.gov (United States)

    Cohen-Goldberg, Ariel M.

    2012-01-01

    Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among…

  20. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    Science.gov (United States)

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  2. Orthographic consistency affects spoken word recognition at different grain-sizes

    DEFF Research Database (Denmark)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating feedback consistency of rhymes. The present lexical decision study, done in English, manipulated the spelling of individual vowels within consistent rhymes. Participants recognized words with consistent rhymes where the vowel has the most typical spelling (e… and spelling. The theoretical and methodological implications for future research in spoken word recognition are discussed.

  3. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    Science.gov (United States)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  5. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  6. The role of semantic content of spoken words and the effect on serial recall

    OpenAIRE

    Körning-Ljungberg, Jessica

    2008-01-01

    With relevance to auditory alarm design, the aim of this study was to investigate whether the semantic content of words (Negative, Neutral, Non-words and Action words) and the way words are spoken ("urgent" and "calm") interrupt performance in serial recall when applying a deviant paradigm. Subjective ratings of perceived "Urgency" and "Attention grabbing" were also measured. An interruption in recall was found to be caused by the words, but no effects were related to the semantic content or to the way th...

  7. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-03-30

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word in Chinese.

  8. Lexical influences on spoken spondaic word recognition in hearing-impaired patients

    Directory of Open Access Journals (Sweden)

    Annie eMoulin

    2015-12-01

    Top-down contextual influences play a major part in speech understanding, especially in hearing-impaired patients with deteriorated auditory input. Those influences are most obvious in difficult listening situations, such as listening to sentences in noise, but can also be observed at the word level under more favorable conditions, as in one of the most commonly used tasks in audiology, i.e., repeating isolated words in silence. This study aimed to explore the role of top-down contextual influences and their dependence on lexical factors and patient-specific factors using standard clinical linguistic material. Spondaic word perception was tested in 160 hearing-impaired patients aged 23 to 88 years with a four-frequency average pure-tone threshold ranging from 21 to 88 dB HL. Sixty spondaic words were randomly presented at a level adjusted to correspond to a speech perception score ranging between 40% and 70% of the performance intensity function obtained using monosyllabic words. Phoneme and whole-word recognition scores were used to calculate two context-influence indices (the j factor and the ratio of word scores to phonemic scores) and were correlated with linguistic factors, such as phonological neighborhood density and several indices of word occurrence frequency. Contextual influence was greater for spondaic words than in similar studies using monosyllabic words, with an overall j factor of 2.07 (SD = 0.5). For both indices, context use decreased with increasing hearing loss once the average hearing loss exceeded 55 dB HL. In right-handed patients, significantly greater context influence was observed for words presented to the right ear than for words presented to the left, especially in patients with many years of education. The correlations between raw word scores (and context-influence indices) and word occurrence frequencies showed a significant age-dependent effect, with a stronger correlation between perception scores and word
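
    The j factor mentioned above is, following Boothroyd and Nittrouer (1988), the exponent relating whole-word to phoneme recognition probabilities, p_word = p_phoneme^j, so it can be computed as below; the example scores are illustrative values chosen to reproduce the reported overall j of 2.07 (smaller j means stronger contextual support, while j approaches the number of phonemes when they are perceived independently).

```python
# j factor: p_word = p_phoneme ** j, hence j = log(p_word) / log(p_phoneme).
import math

def j_factor(word_score, phoneme_score):
    return math.log(word_score) / math.log(phoneme_score)

# e.g. 50% whole-word and 71.5% phoneme recognition (illustrative values)
print(round(j_factor(0.50, 0.715), 2))   # ~2.07, the reported overall j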

  9. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in the early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was Braille or spoken. Responses were larger for identified "new" words read in Braille in bilateral lower- and higher-tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted individuals noted larger responses for "new" words studied in association with pictures, which created a distinctiveness heuristic source factor that enhanced recollection during remembering. Prior behavioral studies in the early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering of "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection.

  10. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang eZhang

    2014-02-01

    The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  11. Effects of talker, rate, and amplitude variation on recognition memory for spoken words

    OpenAIRE

    Bradlow, Ann R.; Nygaard, Lynne C.; Pisoni, David B.

    1999-01-01

    This study investigated the encoding of the surface form of spoken words using a continuous recognition memory task. The purpose was to compare and contrast three sources of stimulus variability—talker, speaking rate, and overall amplitude—to determine the extent to which each source of variability is retained in episodic memory. In Experiment 1, listeners judged whether each word in a list of spoken words was “old” (had occurred previously in the list) or “new.” Listeners were more accurate ...

  12. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  14. Neural stages of spoken, written, and signed word processing in beginning second language learners

    Science.gov (United States)

    Leonard, Matthew K.; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I.; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language. PMID:23847496

  15. Words translated in sentence contexts produce repetition priming in visual word comprehension and spoken word production.

    Science.gov (United States)

    Francis, Wendy S; Camacho, Alejandra; Lara, Carolina

    2014-10-01

    Previous research with words read in context at encoding showed little if any long-term repetition priming. In Experiment 1, 96 Spanish-English bilinguals translated words in isolation or in sentence contexts at encoding. At test, they translated words or named pictures corresponding to words produced at encoding and control words not previously presented. Repetition priming was reliable in all conditions, but priming effects were generally smaller for contextualized than for isolated words. Repetition priming in picture naming indicated priming from production in context. A componential analysis indicated priming from comprehension in context, but only in the less fluent language. Experiment 2 was a replication of Experiment 1 with auditory presentation of the words and sentences to be translated. Repetition priming was reliable in all conditions, but priming effects were again smaller for contextualized than for isolated words. Priming in picture naming indicated priming from production in context, but the componential analysis indicated no detectable priming for auditory comprehension. The results of the two experiments taken together suggest that repetition priming reflects the long-term learning that occurs with comprehension and production exposures to words in the context of natural language.

  16. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (all of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide ranging and up-to-date corpus of English: the British Na
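
    The per-million frequency counts that such a corpus volume tabulates are straightforward to approximate from raw text. A minimal sketch, assuming a plain string as input and a deliberately crude tokenizer (the published lists rely on the BNC's own part-of-speech-tagged tokenization, which this does not reproduce):

      import re
      from collections import Counter

      def per_million(text):
          # Crude word tokenization; real frequency lists use the
          # corpus's own tagged tokenization instead.
          tokens = re.findall(r"[a-z']+", text.lower())
          counts = Counter(tokens)
          total = len(tokens)
          # Normalize raw counts to occurrences per million tokens,
          # the unit conventionally used in frequency lists.
          return {w: c * 1_000_000 / total for w, c in counts.items()}

      freqs = per_million("the cat sat on the mat and the dog sat too")
      print(freqs["the"])  # 250000.0 occurrences per million in this toy text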

  17. Probabilistic Phonotactics as a Cue for Recognizing Spoken Cantonese Words in Speech.

    Science.gov (United States)

    Yip, Michael C W

    2017-02-01

    Previous experimental psycholinguistic studies have suggested that probabilistic phonotactic information may cue the locations of word boundaries in continuous speech, offering one solution to the empirical question of how we recognize and segment individual spoken words in speech. The present study investigated this issue using Cantonese as a test case. In a word-spotting task, listeners were instructed to spot any Cantonese word embedded in a series of nonsense sound sequences. Native Cantonese listeners found it easier to spot the target word in nonsense sequences containing phoneme combinations with high transitional probability than in sequences with low transitional probability. These results indicate that native Cantonese listeners make use of transitional probability information to recognize spoken words in speech.
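
    The transitional-probability statistic manipulated in this study can be estimated from any phonemically transcribed word list. A minimal sketch under that assumption (the toy transcriptions below are hypothetical, not Cantonese materials):

      from collections import Counter

      def transitional_probabilities(words):
          """Estimate P(next phoneme | current phoneme) from adjacent pairs."""
          pair_counts, first_counts = Counter(), Counter()
          for phonemes in words:
              for p1, p2 in zip(phonemes, phonemes[1:]):
                  pair_counts[(p1, p2)] += 1
                  first_counts[p1] += 1
          return {pair: n / first_counts[pair[0]]
                  for pair, n in pair_counts.items()}

      tp = transitional_probabilities([("k", "a", "t"), ("k", "a", "p")])
      # P("a" | "k") = 1.0 in this toy lexicon; sequences built from such
      # high-TP combinations were the easier word-spotting contexts.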

  18. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Segmental and suprasegmental contributions to spoken-word recognition in Dutch

    OpenAIRE

    Koster, M; Cutler, A.

    1997-01-01

    Words can be distinguished by segmental differences or by suprasegmental differences or both. Studies from English suggest that suprasegmentals play little role in human spoken-word recognition; English stress, however, is nearly always unambiguously coded in segmental structure (vowel quality); this relationship is less close in Dutch. The present study directly compared the effects of segmental and suprasegmental mispronunciation on word recognition in Dutch. There was a strong effect of su...

  20. Examining the effects of variation in emotional tone of voice on spoken word recognition.

    Science.gov (United States)

    Krestar, Maura L; McLennan, Conor T

    2013-09-01

    Emotional tone of voice (ETV) is essential for optimal verbal communication. Research has found that the impact of variation in nonlinguistic features of speech on spoken word recognition differs according to a time course. In the current study, we investigated whether intratalker variation in ETV follows the same time course in two long-term repetition priming experiments. We found that intratalker variability in ETVs affected reaction times to spoken words only when processing was relatively slow and difficult, not when processing was relatively fast and easy. These results provide evidence for the use of both abstract and episodic lexical representations for processing within-talker variability in ETV, depending on the time course of spoken word recognition.

  1. Effects of talker, rate, and amplitude variation on recognition memory for spoken words

    Science.gov (United States)

    BRADLOW, ANN R.; NYGAARD, LYNNE C.; PISONI, DAVID B.

    2012-01-01

    This study investigated the encoding of the surface form of spoken words using a continuous recognition memory task. The purpose was to compare and contrast three sources of stimulus variability—talker, speaking rate, and overall amplitude—to determine the extent to which each source of variability is retained in episodic memory. In Experiment 1, listeners judged whether each word in a list of spoken words was “old” (had occurred previously in the list) or “new.” Listeners were more accurate at recognizing a word as old if it was repeated by the same talker and at the same speaking rate; however, there was no recognition advantage for words repeated at the same overall amplitude. In Experiment 2, listeners were first asked to judge whether each word was old or new, as before, and then they had to explicitly judge whether it was repeated by the same talker, at the same rate, or at the same amplitude. On the first task, listeners again showed an advantage in recognition memory for words repeated by the same talker and at same speaking rate, but no advantage occurred for the amplitude condition. However, in all three conditions, listeners were able to explicitly detect whether an old word was repeated by the same talker, at the same rate, or at the same amplitude. These data suggest that although information about all three properties of spoken words is encoded and retained in memory, each source of stimulus variation differs in the extent to which it affects episodic memory for spoken words. PMID:10089756

  2. Recognition of spoken words: semantic effects in lexical access.

    Science.gov (United States)

    Wurm, Lee H; Vakoch, Douglas A; Seaman, Sean R

    2004-01-01

    Until recently most models of word recognition have assumed that semantic effects come into play only after the identification of the word in question. What little evidence exists for early semantic effects in word recognition has relied primarily on priming manipulations using the lexical decision task, and has used visual stimulus presentation. The current study uses auditory stimulus presentation and multiple experimental tasks, and does not use priming. Response latencies for 100 common nouns were found to depend on perceptual dimensions identified by Osgood (1969): Evaluation, Potency, and Activity. In addition, the two-way interactions between these dimensions were significant. All effects were above and beyond the effects of concreteness, word length, frequency, onset phoneme characteristics, stress, and neighborhood density. Results are discussed against evidence from several areas of research suggesting a role of behaviorally important information in perception.

  3. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions.

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R

    2016-10-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. candle), an onset competitor (e.g. candy), a rhyme competitor (e.g. sandal), and an unrelated distractor (e.g. lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition.

  4. Semantic Processing of Out-Of-Vocabulary Words in a Spoken Dialogue System

    CERN Document Server

    Boros, M; Gallwitz, F; Noeth, E; Niemann, H; Boros, Manuela; Aretoulaki, Maria; Gallwitz, Florian; Noeth, Elmar; Niemann, Heinrich

    1997-01-01

    One of the most important causes of failure in spoken dialogue systems is usually neglected: the problem of words that are not covered by the system's vocabulary (out-of-vocabulary or OOV words). In this paper a methodology is described for the detection, classification and processing of OOV words in an automatic train timetable information system. The various extensions that had to be effected on the different modules of the system are reported, resulting in the design of appropriate dialogue strategies, as are encouraging evaluation results on the new versions of the word recogniser and the linguistic processor.
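
    The first step the paper describes, detecting candidate OOV words, can be illustrated with a simple vocabulary check over recognizer hypotheses. This is a hedged sketch only; the actual system used dedicated acoustic modeling of OOV regions and classified the unknown words for semantic processing:

      def flag_oov(hypotheses, vocabulary, threshold=0.5):
          """Mark each (word, confidence) hypothesis as known or possible OOV."""
          labels = []
          for word, confidence in hypotheses:
              if word in vocabulary and confidence >= threshold:
                  labels.append((word, "in-vocabulary"))
              else:
                  # Unknown word or low confidence: hand over to the
                  # dialogue strategy for clarification.
                  labels.append((word, "possible-OOV"))
          return labels

      print(flag_oov([("train", 0.9), ("Szczecin", 0.4)], {"train", "to"}))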

  5. Learning and Consolidation of New Spoken Words in Autism Spectrum Disorder

    Science.gov (United States)

    Henderson, Lisa; Powell, Anna; Gaskell, M. Gareth; Norbury, Courtenay

    2014-01-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words…

  6. Spoken-word recognition in foreign-accented speech by L2 listeners

    NARCIS (Netherlands)

    Weber, A.C.; Broersma, M.E.; Aoyagi, M.

    2011-01-01

    Two cross-modal priming studies investigated the recognition of English words spoken with a foreign accent. Auditory English primes were either typical of a Dutch accent or typical of a Japanese accent in English and were presented to both Dutch and Japanese L2 listeners. Lexical-decision times to s

  7. Neural Correlates of Priming Effects in Children during Spoken Word Processing with Orthographic Demands

    Science.gov (United States)

    Cao, Fan; Khalid, Kainat; Zaveri, Rishi; Bolger, Donald J.; Bitan, Tali; Booth, James R.

    2010-01-01

    Priming effects were examined in 40 children (9-15 years old) using functional magnetic resonance imaging (fMRI). An orthographic judgment task required participants to determine if two sequentially presented spoken words had the same spelling for the rime. Four lexical conditions were designed: similar orthography and phonology (O[superscript…

  8. Cross-modal representation of spoken and written word meaning in left pars triangularis

    NARCIS (Netherlands)

    Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Peeters, Ronald; Adamczuk, Katarzyna; Keuleers, Emmanuel; De Deyne, Simon; Storms, Gerrit; Dupont, Patrick; Vandenberghe, Rik

    2017-01-01

    The correspondence in meaning extracted from written versus spoken input remains to be fully understood neurobiologically. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise

  9. Modeling of phonological encoding in spoken word production: From Germanic languages to Mandarin Chinese and Japanese

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2015-01-01

    It is widely assumed that spoken word production in Germanic languages like Dutch and English involves a parallel activation of phonemic segments and metrical frames in memory, followed by a serial association of segments to the frame, as implemented in the WEAVER++ model (Levelt, Roelofs, & Meyer,

  11. Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Maris, E.G.G.

    2014-01-01

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced

  12. Attention demands of spoken word planning: a review.

    Science.gov (United States)

    Roelofs, Ardi; Piai, Vitória

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent that the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and non-linguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated non-linguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated non-linguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment.

  13. Spoken word recognition and lexical representation in very young children

    OpenAIRE

    Swingley, D.; Aslin, R.

    2000-01-01

    Although children's knowledge of the sound patterns of words has been a focus of debate for many years, little is known about the lexical representations very young children use in word recognition. In particular, researchers have questioned the degree of specificity encoded in early lexical representations. The current study addressed this issue by presenting 18–23-month-olds with object labels that were either correctly pronounced, or mispronounced. Mispronunciations involved replacement of...

  14. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    Science.gov (United States)

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French learning children. Spoken word pairs were either semantically related…

  15. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script.

    Science.gov (United States)

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

    The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect across repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, but only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal but is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF shows that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for models of written production are discussed.

  16. Semantic Richness Effects in Spoken Word Recognition: A Lexical Decision and Semantic Categorization Megastudy.

    Science.gov (United States)

    Goh, Winston D; Yap, Melvin J; Lau, Mabel C; Ng, Melvin M R; Tan, Luuan-Chin

    2016-01-01

    A large number of studies have demonstrated that semantic richness dimensions (e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence) influence word recognition processes. Some of these richness effects appear to be task-general, while others have been found to vary across tasks. Importantly, almost all of these findings come from the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects are also found in spoken word recognition, using a megastudy approach that allows for an examination of the relative contribution of the various semantic properties to performance in two tasks: lexical decision and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction: faster responses for spoken words that were concrete, emotionally valenced, and high in number of features. Arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed.
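
    Megastudy analyses of this kind typically regress response latencies on the richness dimensions simultaneously, so that each predictor's unique variance can be assessed. A minimal sketch with simulated data (predictor values and coefficients are invented for illustration, not the study's results):

      import numpy as np

      rng = np.random.default_rng(0)
      n_words = 500
      # Hypothetical standardized predictors, one value per spoken word.
      concreteness = rng.normal(size=n_words)
      valence = rng.normal(size=n_words)
      n_features = rng.normal(size=n_words)
      latency = (900 - 20 * concreteness - 10 * valence
                 - 15 * n_features + rng.normal(0, 50, n_words))

      # Ordinary least squares fit of latency on all predictors at once.
      X = np.column_stack([np.ones(n_words), concreteness, valence, n_features])
      beta, *_ = np.linalg.lstsq(X, latency, rcond=None)
      # Negative coefficients correspond to the reported pattern:
      # faster responses for concrete, valenced, feature-rich words.
      print(beta)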

  17. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans

    Science.gov (United States)

    Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin

    2011-08-01

    Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.

  18. Decoding Vowels and Consonants in Spoken and Imagined Words Using Electrocorticographic Signals in Humans

    Science.gov (United States)

    Pei, Xiaomei; Barbour, Dennis; Leuthardt, Eric C.; Schalk, Gerwin

    2013-01-01

    Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to “read the mind,” i.e., to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography (ECoG)) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of the vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech. PMID:21750369

  19. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, J S H; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2017-08-31

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs. unfamiliar objects) and phonological (L1- vs. L2-like novel words) familiarity. Participants were trained and tested after a 12-hour interval that included either overnight sleep or daytime wakefulness. Our results showed: (i) benefits of sleep to recognition memory that were greater for words with L2-like phonology; and (ii) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  20. Phonological neighborhood effects in spoken word production: an fMRI study.

    Science.gov (United States)

    Peramunage, Dasun; Blumstein, Sheila E; Myers, Emily B; Goldrick, Matthew; Baese-Berk, Melissa

    2011-03-01

    The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the stimulus set. Behavioral results showed longer voice-onset time for MP target words, replicating earlier behavioral results [Baese-Berk, M., & Goldrick, M. Mechanisms of interaction in speech production. Language and Cognitive Processes, 24, 527-554, 2009]. fMRI results revealed reduced activation for MP words compared to NMP words in a network including left posterior superior temporal gyrus, the supramarginal gyrus, inferior frontal gyrus, and precentral gyrus. These findings support cascade models of spoken word production and show that neural activation at the lexical level modulates activation in those brain regions involved in lexical selection, phonological planning, and, ultimately, motor plans for production. The facilitatory effects for words with MP neighbors suggest that competition effects reflect the overlap inherent in the phonological representation of the target word and its MP neighbor.

  1. Spectrotemporal processing drives fast access to memory traces for spoken words.

    Science.gov (United States)

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Spoken word production: A theory of lexical access

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2001-01-01

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker's focusing on a target concept and ending with the initiation of articulation. The initial s

  3. Vowel devoicing and the perception of spoken Japanese words.

    Science.gov (United States)

    Cutler, Anne; Otake, Takashi; McQueen, James M

    2009-03-01

    Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191-243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, "morning," was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake, possibly arising from nyakusake, than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.

  4. Vowel devoicing and the perception of spoken Japanese words

    NARCIS (Netherlands)

    Cutler, A.; Otake, T.; McQueen, J.M.

    2009-01-01

    Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191–243], devoicing is po

  5. Integration of Pragmatic and Phonetic Cues in Spoken Word Recognition

    Science.gov (United States)

    Rohde, Hannah; Ettlinger, Marc

    2012-01-01

    Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate…

  6. Lexical effects on spoken-word recognition in children with normal hearing.

    Science.gov (United States)

    Krull, Vidya; Choi, Sangsook; Kirk, Karen Iler; Prusick, Lindsay; French, Brian

    2010-02-01

    words in isolation and in sentences. Word frequency and lexical density seem to influence word recognition independently in children with normal hearing. This is similar to earlier results in adults with normal hearing. In addition, there seems to be an interaction between the two factors, with lexical density being more heavily weighted than word frequency. These results give us further insight into the way children organize and access words from long-term lexical memory in a relational way. Our results showed that lexical effects were most evident at poorer SNRs. This may have important implications for assessing spoken-word recognition performance in children with sensory aids because they typically receive a degraded auditory signal.

  7. The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening

    Science.gov (United States)

    Cibelli, Emily S.; Leonard, Matthew K.; Johnson, Keith; Chang, Edward F.

    2015-01-01

    Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003
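
    The two lexical variables examined here are easy to define over a phonemically transcribed lexicon: at each point in the input, the cohort is the set of words consistent with the phonemes heard so far. A minimal sketch (the transcriptions and frequency counts are toy values, not the study's materials):

      def cohort_stats(prefix, lexicon):
          """Return (cohort size, summed lexical frequency) for a phoneme prefix."""
          cohort = {word: freq for word, freq in lexicon.items()
                    if word.startswith(prefix)}
          return len(cohort), sum(cohort.values())

      lexicon = {"kat": 80, "kap": 12, "kip": 25, "dag": 60}
      for i in range(1, 4):
          # The cohort shrinks, and its summed frequency drops,
          # as the spoken input unfolds phoneme by phoneme.
          print(cohort_stats("kat"[:i], lexicon))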

  8. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
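
    The validation logic here reduces to correlating per-item scores collected online with those collected in the lab. A small self-contained sketch of that comparison (the score vectors below are invented, not the study's data):

      from statistics import mean

      def pearson_r(xs, ys):
          """Plain Pearson correlation between two equal-length score lists."""
          mx, my = mean(xs), mean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sum((x - mx) ** 2 for x in xs) ** 0.5
          sy = sum((y - my) ** 2 for y in ys) ** 0.5
          return cov / (sx * sy)

      lab_accuracy = [0.92, 0.85, 0.77, 0.60, 0.88]
      online_accuracy = [0.88, 0.80, 0.70, 0.55, 0.83]
      print(pearson_r(lab_accuracy, online_accuracy))  # near 1: strong agreement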

  9. Integration of pragmatic and phonetic cues in spoken word recognition.

    Science.gov (United States)

    Rohde, Hannah; Ettlinger, Marc

    2012-07-01

    Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the 2 most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/∼/∫i/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time course of this interaction and discussing how different models of cue integration could be adapted to account for our results. 2012 APA, all rights reserved

  10. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases the reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  11. Infant perceptual development for faces and spoken words: an integrated approach.

    Science.gov (United States)

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-11-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception.

  12. The self-organization of a spoken word.

    Science.gov (United States)

    Holden, John G; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics - interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant's distributions than the ex-Gaussian or ex-Wald - alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions.
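
    The distributional comparison behind this argument can be sketched by fitting candidate response-time distributions and comparing their log-likelihoods. A simplified illustration with scipy, fitting single distributions only (the paper's analyses used lognormal/inverse power law mixtures and hazard functions, which this sketch does not reproduce):

      import numpy as np
      from scipy import stats

      # Simulated pronunciation times (ms); the study analyzed large
      # speeded word-naming data sets instead.
      rt = stats.lognorm.rvs(s=0.3, scale=600, size=2000, random_state=1)

      for name, dist in [("lognormal", stats.lognorm),
                         ("ex-Gaussian", stats.exponnorm)]:
          params = dist.fit(rt)
          loglik = np.sum(dist.logpdf(rt, *params))
          print(name, round(loglik, 1))  # higher log-likelihood = better fit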

  13. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    Science.gov (United States)

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  14. Form-based priming in spoken word recognition: the roles of competition and bias.

    Science.gov (United States)

    Goldinger, S D; Luce, P A; Pisoni, D B; Marcario, J K

    1992-11-01

    Phonological priming of spoken words refers to improved recognition of targets preceded by primes that share at least one of their constituent phonemes (e.g., BULL-BEER). Phonetic priming refers to reduced recognition of targets preceded by primes that share no phonemes with targets but are phonetically similar to targets (e.g., BULL-VEER). Five experiments were conducted to investigate the role of bias in phonological priming. Performance was compared across conditions of phonological and phonetic priming under a variety of procedural manipulations. Ss in phonological priming conditions systematically modified their responses on unrelated priming trials in perceptual identification, and they were slower and more errorful on unrelated trials in lexical decision than were Ss in phonetic priming conditions. Phonetic and phonological priming effects display different time courses and also different interactions with changes in proportion of related priming trials. Phonological priming involves bias; phonetic priming appears to reflect basic properties of activation and competition in spoken word recognition.

  15. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.

  16. V2 word order in subordinate clauses in spoken Danish

    DEFF Research Database (Denmark)

    Jensen, Torben Juel; Christensen, Tanya Karoli

    and the type of subordinating conjunction, although social and geographical factors also have an impact. The results are consistent with the hypothesis that V2 word order is associated with the foreground or main point of the utterance, if we accept it as a statistical tendency in language use rather than…

  17. The Effects of Listener's Familiarity about a Talker on the Free Recall Task of Spoken Words

    Directory of Open Access Journals (Sweden)

    Chikako Oda

    2011-10-01

    Several recent studies have examined the interaction between a talker's acoustic characteristics and spoken word recognition in speech perception, and have shown that a listener's familiarity with a talker influences the ease of spoken word processing. The present study examined the effect of listeners' familiarity with talkers on a free recall task for words spoken by two talkers. Subjects participated in three conditions of the task, in which the listener had (1) explicit knowledge, (2) implicit knowledge, or (3) no knowledge of the talker. In condition (1), subjects were familiar with the talkers' voices and were initially informed whose voices they would hear. In condition (2), subjects were familiar with the talkers' voices but were not informed whose voices they would hear. In condition (3), subjects were entirely unfamiliar with the talkers' voices and were not informed whose voices they would hear. We analyzed the percentage of correct answers and compared the results across the three conditions. We discuss the possibility that a listener's knowledge of an individual talker's acoustic characteristics, stored in long-term memory, could reduce the cognitive resources required for verbal information processing.

  18. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

Meghan Sumner

    2014-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  19. The acceleration of spoken-word processing in children's native-language acquisition: an ERP cohort study.

    Science.gov (United States)

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko

    2011-04-01

    Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n=80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  1. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason the use of multi-modal speech processing has been limited. In this study, we developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement from the light reflected from the mouth region. In our experiment, we achieved a 66% word recognition rate using lip movement features alone. This result shows that the sensor can serve as a tool for multi-modal speech processing when combined with a microphone mounted on the headset.

  3. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  4. The neural basis of inhibitory effects of semantic and phonological neighbors in spoken word production.

    Science.gov (United States)

    Mirman, Daniel; Graziano, Kristen M

    2013-09-01

    Theories of word production and word recognition generally agree that multiple word candidates are activated during processing. The facilitative and inhibitory effects of these "lexical neighbors" have been studied extensively using behavioral methods and have spurred theoretical development in psycholinguistics, but relatively little is known about the neural basis of these effects and how lesions may affect them. This study used voxel-wise lesion overlap subtraction to examine semantic and phonological neighbor effects in spoken word production following left hemisphere stroke. Increased inhibitory effects of near semantic neighbors were associated with inferior frontal lobe lesions, suggesting impaired selection among strongly activated semantically related candidates. Increased inhibitory effects of phonological neighbors were associated with posterior superior temporal and inferior parietal lobe lesions. In combination with previous studies, these results suggest that such lesions cause phonological-to-lexical feedback to more strongly activate phonologically related lexical candidates. The comparison of semantic and phonological neighbor effects and how they are affected by left hemisphere lesions provides new insights into the cognitive dynamics and neural basis of phonological, semantic, and cognitive control processes in spoken word production.

  5. Interactive Learning of Spoken Words and Their Meanings Through an Audio-Visual Interface

    Science.gov (United States)

    Iwahashi, Naoto

    This paper presents a new interactive learning method for spoken word acquisition through human-machine audio-visual interfaces. During the course of learning, the machine makes a decision about whether an orally input word is a word in the lexicon the machine has learned, using both speech and visual cues. Learning is carried out on-line, incrementally, based on a combination of active and unsupervised learning principles. If the machine judges with a high degree of confidence that its decision is correct, it learns the statistical models of the word and a corresponding image category as its meaning in an unsupervised way. Otherwise, it asks the user a question in an active way. The function used to estimate the degree of confidence is also learned adaptively on-line. Experimental results show that the combination of active and unsupervised learning principles enables the machine and the user to adapt to each other, which makes the learning process more efficient.
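
    The decision rule described here, learn silently when confident and ask the user otherwise, can be captured in a few lines. A toy sketch with hypothetical names (the actual system learned statistical speech and image-category models and adapted its confidence function on-line, none of which is modeled below):

      def observe(lexicon, word, image_category, ask_user, threshold=0.9):
          """Confidence-gated mix of unsupervised and active learning."""
          confidence = 1.0 if word in lexicon else 0.0  # toy confidence estimate
          if confidence >= threshold:
              lexicon[word] = image_category  # confident: update unsupervised
              return "accepted"
          if ask_user(word):  # uncertain: active query to the user
              lexicon[word] = image_category
              return "learned after asking"
          return "rejected"

      lexicon = {"ball": "toy"}
      print(observe(lexicon, "ball", "toy", ask_user=lambda w: True))
      print(observe(lexicon, "cup", "dish", ask_user=lambda w: True))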

  6. Long-term temporal tracking of speech rate affects spoken-word recognition.

    Science.gov (United States)

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  7. An fMRI study of concreteness effects during spoken word recognition in aging. Preservation or attenuation?

    Directory of Open Access Journals (Sweden)

    Tracy Roxbury

    2016-01-01

    It is unclear whether healthy aging influences concreteness effects (i.e. the processing advantage seen for concrete over abstract words) and their associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete versus abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing.

  8. Cross-modal representation of spoken and written word meaning in left pars triangularis.

    Science.gov (United States)

    Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Peeters, Ronald; Adamczuk, Katarzyna; Keuleers, Emmanuel; De Deyne, Simon; Storms, Gerrit; Dupont, Patrick; Vandenberghe, Rik

    2017-04-15

    The correspondence in meaning extracted from written versus spoken input remains to be fully understood neurobiologically. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise univariate analysis had to show significant activation during a semantic task (property verification) performed with written and spoken concrete words compared to the perceptually matched control condition. Second, in an independent dataset, in these clusters, the similarity in fMRI response pattern to two distinct entities, one presented as a written and the other as a spoken word, had to correlate with the similarity in meaning between these entities. The left ventral occipitotemporal transition zone and ventromedial temporal cortex, retrosplenial cortex, pars orbitalis bilaterally, and the left pars triangularis were all activated in the univariate contrast. Only the left pars triangularis showed a cross-modal semantic similarity effect. There was no effect of either phonological or orthographic similarity in this region. The cross-modal semantic similarity effect was confirmed by a secondary analysis in the cytoarchitectonically defined BA45. A semantic similarity effect was also present in the ventral occipital regions but only within the visual modality, and in the anterior superior temporal cortex only within the auditory modality. This study provides direct evidence for the coding of word meaning in BA45 and positions its contribution to semantic processing at the confluence of input-modality specific pathways that code for meaning within the respective input modalities. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Inhibitory processes and spoken word recognition in young and older adults: the interaction of lexical competition and semantic context.

    Science.gov (United States)

    Sommers, M S; Danielson, S M

    1999-09-01

    Two experiments were conducted to examine the importance of inhibitory abilities and semantic context to spoken word recognition in older and young adults. In Experiment 1, identification scores were obtained in 3 contexts: single words, low-predictability sentences, and high-predictability sentences. Additionally, identification performance was examined as a function of neighborhood density (number of items phonetically similar to a target word). Older adults had greater difficulty than young adults recognizing words with many neighbors (hard words). However, older adults also exhibited greater benefits as a result of adding contextual information. Individual differences in inhibitory abilities contributed significantly to recognition performance for lexically hard words but not for lexically easy words. The roles of inhibitory abilities and linguistic knowledge in explaining age-related impairments in spoken word recognition are discussed.
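
    Neighborhood density, as defined above (the number of items phonetically similar to a target word), is commonly operationalized as a one-phoneme edit. The sketch below counts such neighbors over an invented mini-lexicon, with letters standing in for phonemes as a simplification.

        # Toy neighborhood density: neighbors differ by one substitution,
        # insertion, or deletion. Letters stand in for phonemes; the
        # mini-lexicon is invented for illustration.
        def is_neighbor(a: str, b: str) -> bool:
            if a == b:
                return False
            if len(a) == len(b):               # one substitution
                return sum(x != y for x, y in zip(a, b)) == 1
            if abs(len(a) - len(b)) == 1:      # one insertion/deletion
                longer, shorter = sorted((a, b), key=len)[::-1]
                return any(longer[:i] + longer[i + 1:] == shorter
                           for i in range(len(longer)))
            return False

        lexicon = ["cat", "bat", "hat", "cut", "cast", "at", "dog"]
        density = {w: sum(is_neighbor(w, v) for v in lexicon) for w in lexicon}
        print(density["cat"], density["dog"])  # "cat" is hard (5 neighbors), "dog" easy (0)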

  10. Recognition of spoken words by native and non-native listeners: Talker-, listener-, and item-related factors

    OpenAIRE

    Bradlow, Ann R.; Pisoni, David B.

    1999-01-01

    In order to gain insight into the interplay between the talker-, listener-, and item-related factors that influence speech perception, a large multi-talker database of digitally recorded spoken words was developed, and was then submitted to intelligibility tests with multiple listeners. Ten talkers produced two lists of words at three speaking rates. One list contained lexically “easy” words (words with few phonetically similar sounding “neighbors” with which they could be confused), and the ...

  11. On the locus of morphological effects in spoken-word recognition: before or after lexical identification?

    Science.gov (United States)

    Greber, C; Frauenfelder, U H

    The temporal locus of morphological decomposition in spoken-word recognition was explored in three experiments in which French participants detected the initial CV (LA) or CVC (LAV) in matched monomorphemic pseudosuffixed (lavande) and polymorphemic-suffixed (lavage) carrier words. The proportion of foil trials was increased across experiments (0, 50, or 100%) to delay the moment when participants responded. For the experiment without foils and with the fastest reaction times, a similar pattern of results was obtained for the two types of carrier words. In contrast, an interaction between target type and morphological status of the carrier was obtained when the proportion of foils was higher and the detection latencies were slower. These results point to a late processing locus of morphological decomposition. Copyright 1999 Academic Press.

  12. Spatiotemporal interaction between sound form and meaning during spoken word perception.

    Science.gov (United States)

    Uusvuori, Johanna; Parviainen, Tiina; Inkinen, Marianne; Salmelin, Riitta

    2008-02-01

    Cortical dynamics of spoken word perception is not well understood. The possible interplay between analysis of sound form and meaning, in particular, remains elusive. We used magnetoencephalography to study cortical manifestation of phonological and semantic priming. Ten subjects listened to lists of 4 words. The first 3 words set a semantic or phonological context, and the list-final word was congruent or incongruent with this context. Attenuation of activation by priming during the first 3 words and increase of activation to semantic or phonological mismatch in the list-final word provided converging evidence: The superior temporal cortex bilaterally was involved in both analysis of sound form and meaning but the role of each hemisphere varied over time. Sensitivity to sound form was observed at approximately 100 ms after word onset, followed by sensitivity to semantic aspects from approximately 250 ms onwards, in the left hemisphere. From approximately 450 ms onwards, the picture was changed, with semantic effects now present bilaterally, accompanied by a subtle late effect of sound form in the right hemisphere. Present MEG data provide a detailed spatiotemporal account of neural mechanisms during speech perception that may underlie characterizations obtained with other neuroimaging methods less sensitive in temporal or spatial domain.

  13. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left hemisphere's fronto-temporal language network, and does not require focused attention on the linguistic input.

  14. Electrophysiological Correlates of Emotional Content and Volume Level in Spoken Word Processing.

    Science.gov (United States)

    Grass, Annika; Bayer, Mareike; Schacht, Annekathrin

    2016-01-01

    For visual stimuli with emotional content, such as pictures and written words, stimulus size has been shown to increase emotion effects in the early posterior negativity (EPN), a component of event-related potentials (ERPs) indexing attention allocation during visual sensory encoding. In the present study, we addressed the question of whether this enhanced relevance of larger (visual) stimuli generalizes to the auditory domain and whether auditory emotion effects are modulated by volume. Subjects listened to spoken words with emotional or neutral content, played at two different volume levels, while ERPs were recorded. Negative emotional content led to an increased frontal positivity and parieto-occipital negativity - a scalp distribution similar to the EPN - between ~370 and 530 ms. Importantly, this emotion-related ERP component was not modulated by differences in volume level, which impacted early auditory processing, as reflected in increased amplitudes of the N1 (80-130 ms) and P2 (130-265 ms) components, as hypothesized. However, contrary to effects of stimulus size in the visual domain, volume level did not influence later ERP components. These findings indicate modality-specific and functionally independent processing triggered by the emotional content of spoken words and by volume level.

  15. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    Directory of Open Access Journals (Sweden)

    Michael Inspector

    OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set flat monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was set throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentations of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  16. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  17. Investigating the time course of spoken word recognition: electrophysiological evidence for the influences of phonological similarity.

    Science.gov (United States)

    Desroches, Amy S; Newman, Randy Lynn; Joanisse, Marc F

    2009-10-01

    Behavioral and modeling evidence suggests that words compete for recognition during auditory word identification, and that phonological similarity is a driving factor in this competition. The present study used event-related potentials (ERPs) to examine the temporal dynamics of different types of phonological competition (i.e., cohort and rhyme). ERPs were recorded during a novel picture-word matching task, where a target picture was followed by an auditory word that either matched the target (CONE-cone), or mismatched in one of three ways: rhyme (CONE-bone), cohort (CONE-comb), and unrelated (CONE-fox). Rhymes and cohorts differentially modulated two distinct ERP components, the phonological mismatch negativity and the N400, revealing the influences of prelexical and lexical processing components in speech recognition. Cohort mismatches resulted in late increased negativity in the N400, reflecting disambiguation of the later point of miscue and the combined influences of top-down expectations and misleading bottom-up phonological information on processing. In contrast, we observed a reduction in the N400 for rhyme mismatches, reflecting lexical activation of rhyme competitors. Moreover, the observed rhyme effects suggest that there is an interaction between phoneme-level and lexical-level information in the recognition of spoken words. The results support the theory that both levels of information are engaged in parallel during auditory word recognition in a way that permits both bottom-up and top-down competition effects.

  18. Spoken word recognition by Latino children learning Spanish as their first language*

    Science.gov (United States)

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  19. Spoken word recognition by Latino children learning Spanish as their first language.

    Science.gov (United States)

    Hurtado, Nereyda; Marchman, Virginia A; Fernald, Anne

    2007-05-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development.

  20. Cross-modal metaphorical mapping of spoken emotion words onto vertical space

    Directory of Open Access Journals (Sweden)

    Pedro R. Montoro

    2015-08-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step towards the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a ‘positive-up/negative-down’ embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily-simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally-valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.

  1. Cross-modal metaphorical mapping of spoken emotion words onto vertical space.

    Science.gov (United States)

    Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.

  2. The Language, Tone and Prosody of Emotions: Neural Dynamics of Spoken-Word Valence Perception

    Directory of Open Access Journals (Sweden)

    Einat Liebenthal

    2016-11-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala – a subcortical center for emotion perception – are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, appears more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  3. Spoken word memory traces within the human auditory cortex revealed by repetition priming and functional magnetic resonance imaging.

    Science.gov (United States)

    Gagnepain, Pierre; Chételat, Gael; Landeau, Brigitte; Dayan, Jacques; Eustache, Francis; Lebreton, Karine

    2008-05-14

    Previous neuroimaging studies in the visual domain have shown that neurons along the perceptual processing pathway retain the physical properties of written words, faces, and objects. The aim of this study was to reveal the existence of similar neuronal properties within the human auditory cortex. Brain activity was measured using functional magnetic resonance imaging during a repetition priming paradigm, with words and pseudowords heard in an acoustically degraded format. Both the amplitude and peak latency of the hemodynamic response (HR) were assessed to determine the nature of the neuronal signature of spoken word priming. A statistically significant stimulus type by repetition interaction was found in various bilateral auditory cortical areas, demonstrating either HR suppression and enhancement for repeated spoken words and pseudowords, respectively, or word-specific repetition suppression without any significant effects for pseudowords. Repetition latency shift only occurred with word-specific repetition suppression in the right middle/posterior superior temporal sulcus. In this region, both repetition suppression and latency shift were related to behavioral priming. Our findings highlight for the first time the existence of long-term spoken word memory traces within the human auditory cortex. The timescale of auditory information integration and the neuronal mechanisms underlying priming both appear to differ according to the level of representations coded by neurons. Repetition may "sharpen" word-nonspecific representations coding short temporal variations, whereas a complex interaction between the activation strength and temporal integration of neuronal activity may occur in neuronal populations coding word-specific representations within longer temporal windows.

  4. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  5. Development of brain networks involved in spoken word processing of Mandarin Chinese

    Science.gov (United States)

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J.; Booth, James R.

    2010-01-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on task. There were developmental increases in left inferior temporal gyrus and right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. PMID:20884355

  6. Spoken Word Recognition and Serial Recall of Words from Components in the Phonological Network

    Science.gov (United States)

    Siew, Cynthia S. Q.; Vitevitch, Michael S.

    2016-01-01

    Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…
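
    The giant component and lexical islands mentioned above can be made concrete with a small graph sketch; the word list and edges below are invented for illustration, and networkx is used simply as a convenient graph library.

        import networkx as nx

        # Toy phonological network: edges link phonological neighbors.
        G = nx.Graph()
        G.add_edges_from([("cat", "bat"), ("bat", "hat"), ("cat", "cut"),
                          ("speech", "speed")])  # a two-word lexical island
        G.add_node("strength")                   # an isolate: no neighbors at all

        components = sorted(nx.connected_components(G), key=len, reverse=True)
        print(components[0])   # giant component: {'cat', 'bat', 'hat', 'cut'}
        print(components[1:])  # smaller lexical islands and isolates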

  7. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    Science.gov (United States)

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  8. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
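
    The passive oddball design described here interleaves rare deviants among frequent standards; the sketch below generates such a sequence, with the 15% deviant rate and item labels chosen purely for illustration rather than taken from the study.

        import random

        # Sketch of an oddball sequence: rare familiar-word deviants among
        # frequent unfamiliar-word standards. Rate and items are invented.
        def oddball_sequence(standards, deviants, n_trials=200,
                             p_deviant=0.15, seed=1):
            rng = random.Random(seed)
            return [("deviant", rng.choice(deviants))
                    if rng.random() < p_deviant
                    else ("standard", rng.choice(standards))
                    for _ in range(n_trials)]

        trials = oddball_sequence(["unfam1", "unfam2"], ["fam1"])
        print(sum(kind == "deviant" for kind, _ in trials))  # roughly 30 of 200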

  9. On the nature of sonority in spoken word production: evidence from neuropsychology.

    Science.gov (United States)

    Miozzo, Michele; Buchwald, Adam

    2013-09-01

    The concept of sonority - that speech sounds can be placed along a universal sonority scale that affects syllable structure - has proved valuable in accounting for a wide spectrum of linguistic phenomena and psycholinguistic findings. Yet, despite the success of this concept in specifying principles governing sound structure, several questions remain about sonority. One issue that needs clarification concerns its locus in the processes involved in spoken language production, and specifically whether sonority affects the computation of abstract word form representations (phonology), the encoding of context-specific features (phonetics), or both of these processes. This issue was examined in the present study investigating two brain-damaged individuals with impairment arising primarily from deficits affecting phonological and phonetic processes, respectively. Clear effects of sonority on production accuracy were observed in both individuals in tests of word onsets and codas in word production. These findings indicate that the underlying principles governing sound structure that are captured by the notion of sonority play a role at both phonological and phonetic levels of processing. Furthermore, aspects of the errors recorded from our participants revealed features of syllabic structure proposed under current phonological theories (e.g., articulatory phonology). Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Words Spoken with Insistence: "Wak'as" and the Limits of the Bolivian Multi-Institutional Democracy

    Science.gov (United States)

    Cuelenaere, Laurence Janine

    2009-01-01

    Building on 18 months of fieldwork in the Bolivian highlands, this dissertation examines how traversing landscapes, through the mediation of spatial practices and spoken words, is embedded in systems of belief. By focusing on "wak'as" (i.e. sacred objects) and on how the inhabitants of the Altiplano relate to the Andean deities known as "wak'as,"…

  11. Comments - Error Biases in Spoken Word Planning and Monitoring by Aphasic and Nonaphasic Speakers : Comment on Rapp and Goldrick

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2004-01-01

    Rapp and Goldrick (2000) maintain that the lexical- and mixed-error biases in picture naming by aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship between lexical items and their sounds in spoken word production. I argue that the newly constructed models

  12. Recognition of spoken words by native and non-native listeners: Talker-, listener-, and item-related factors

    Science.gov (United States)

    Bradlow, Ann R.; Pisoni, David B.

    2012-01-01

    In order to gain insight into the interplay between the talker-, listener-, and item-related factors that influence speech perception, a large multi-talker database of digitally recorded spoken words was developed, and was then submitted to intelligibility tests with multiple listeners. Ten talkers produced two lists of words at three speaking rates. One list contained lexically “easy” words (words with few phonetically similar sounding “neighbors” with which they could be confused), and the other list contained lexically “hard” words (words with many phonetically similar sounding “neighbors”). An analysis of the intelligibility data obtained with native speakers of English (experiment 1) showed a strong effect of lexical similarity. Easy words had higher intelligibility scores than hard words. A strong effect of speaking rate was also found whereby slow and medium rate words had higher intelligibility scores than fast rate words. Finally, a relationship was also observed between the various stimulus factors whereby the perceptual difficulties imposed by one factor, such as a hard word spoken at a fast rate, could be overcome by the advantage gained through the listener's experience and familiarity with the speech of a particular talker. In experiment 2, the investigation was extended to another listener population, namely, non-native listeners. Results showed that the ability to take advantage of surface phonetic information, such as a consistent talker across items, is a perceptual skill that transfers easily from first to second language perception. However, non-native listeners had particular difficulty with lexically hard words even when familiarity with the items was controlled, suggesting that non-native word recognition may be compromised when fine phonetic discrimination at the segmental level is required. Taken together, the results of this study provide insight into the signal-dependent and signal-independent factors that influence spoken word recognition.

  13. Age-related neural reorganization during spoken word recognition: the interaction of form and meaning.

    Science.gov (United States)

    Shafto, Meredith; Randall, Billi; Stamatakis, Emmanuel A; Wright, Paul; Tyler, L K

    2012-06-01

    Research on language and aging typically shows that language comprehension is preserved across the life span. Recent neuroimaging results suggest that this good performance is underpinned by age-related neural reorganization [e.g., Tyler, L. K., Shafto, M. A., Randall, B., Wright, P., Marslen-Wilson, W. D., & Stamatakis, E. A. Preserving syntactic processing across the adult life span: The modulation of the frontotemporal language system in the context of age-related atrophy. Cerebral Cortex, 20, 352-364, 2010]. The current study examines how age-related reorganization affects the balance between component linguistic processes by manipulating semantic and phonological factors during spoken word recognition in younger and older adults. Participants in an fMRI study performed an auditory lexical decision task where words varied in their phonological and semantic properties as measured by degree of phonological competition and imageability. Older adults had a preserved lexicality effect, but compared with younger people, their behavioral sensitivity to phonological competition was reduced, as was competition-related activity in left inferior frontal gyrus. This was accompanied by increases in behavioral sensitivity to imageability and imageability-related activity in left middle temporal gyrus. These results support previous findings that neural compensation underpins preserved comprehension in aging and demonstrate that neural reorganization can affect the balance between semantic and phonological processing.

  14. Engaging Minority Youth in Diabetes Prevention Efforts Through a Participatory, Spoken-Word Social Marketing Campaign.

    Science.gov (United States)

    Rogers, Elizabeth A; Fine, Sarah C; Handley, Margaret A; Davis, Hodari B; Kass, James; Schillinger, Dean

    2017-07-01

    To examine the reach, efficacy, and adoption of The Bigger Picture, a type 2 diabetes (T2DM) social marketing campaign that uses spoken-word public service announcements (PSAs) to teach youth about socioenvironmental conditions influencing T2DM risk. A nonexperimental pilot dissemination evaluation was conducted through high school assemblies and a Web-based platform. The study took place in San Francisco Bay Area high schools during 2013; 885 students were sampled from 13 high schools. A 1-hour assembly provided data, poet performances, video PSAs, and Web-based platform information. A Web-based platform featured the campaign Web site and social media. Student surveys preassembly and postassembly (knowledge, attitudes), assembly observations, school demographics, counts of Web-based utilization, and adoption were measured. Descriptive statistics, McNemar's χ² test, and mixed modeling accounting for clustering were used to analyze data. The campaign included 23 youth poet-created PSAs. It reached >2400 students (93% self-identified non-white) through school assemblies and has garnered >1,000,000 views of Web-based video PSAs. School participants demonstrated increased short-term knowledge of T2DM as preventable, with risk driven by socioenvironmental factors (34% preassembly identified environmental causes as influencing T2DM risk compared to 83% postassembly), and perceived greater personal salience of T2DM risk reduction (p < .001 for all). The campaign has been adopted by regional public health departments. The Bigger Picture campaign showed its potential for reaching and engaging diverse youth. Campaign messaging is being adopted by stakeholders.
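
    McNemar's χ² test, named in the analysis above, compares paired pre/post binary outcomes. The 2×2 counts below are invented to show the mechanics (statsmodels provides an implementation) and are not the campaign's data.

        from statsmodels.stats.contingency_tables import mcnemar

        # Invented paired pre/post counts (not the study's data):
        # rows = preassembly correct/incorrect, cols = postassembly.
        table = [[30, 5],    # correct pre: 30 stayed correct, 5 became incorrect
                 [49, 16]]   # incorrect pre: 49 became correct, 16 stayed incorrect
        result = mcnemar(table, exact=False, correction=True)  # chi-square form
        print(result.statistic, result.pvalue)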

  15. Norms of Emotional Valence, Arousal, Threat Value and Shock Value for 80 Spoken French Words: Comparison Between Neutral and Emotional Tones of Voice

    Directory of Open Access Journals (Sweden)

    Julie Bertels

    2009-01-01

    This paper presents a controlled database of 80 neutral, negative, positive and taboo spoken French words rated by 166 participants on scales for emotional valence, arousal, threat value and shock value. Ratings were provided for each word spoken in a neutral and in an emotionally congruent tone of voice. The data point to the importance of taking into account various emotional dimensions of a stimulus: although strongly correlated, these emotional dimensions cannot be mingled and their impact on emotional evaluation varies according to the emotional category of the word. This also holds true for the influence of the tone of voice in which the words are uttered.

  16. Electrophysiological evidence for the involvement of the approximate number system in preschoolers' processing of spoken number words.

    Science.gov (United States)

    Pinhas, Michal; Donohue, Sarah E; Woldorff, Marty G; Brannon, Elizabeth M

    2014-09-01

    Little is known about the neural underpinnings of number word comprehension in young children. Here we investigated the neural processing of these words during the crucial developmental window in which children learn their meanings and asked whether such processing relies on the Approximate Number System. ERPs were recorded as 3- to 5-year-old children heard the words one, two, three, or six while looking at pictures of 1, 2, 3, or 6 objects. The auditory number word was incongruent with the number of visual objects on half the trials and congruent on the other half. Children's number word comprehension predicted their ERP incongruency effects. Specifically, children with the least number word knowledge did not show any ERP incongruency effects, whereas those with intermediate and high number word knowledge showed an enhanced, negative polarity incongruency response (N(inc)) over centroparietal sites from 200 to 500 msec after the number word onset. This negativity was followed by an enhanced, positive polarity incongruency effect (P(inc)) that emerged bilaterally over parietal sites at about 700 msec. Moreover, children with the most number word knowledge showed ratio dependence in the P(inc) (larger for greater compared with smaller numerical mismatches), a hallmark of the Approximate Number System. Importantly, a similar modulation of the P(inc) from 700 to 800 msec was found in children with intermediate number word knowledge. These results provide the first neural correlates of spoken number word comprehension in preschoolers and are consistent with the view that children map number words onto approximate number representations before they fully master the verbal count list.
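
    Ratio dependence, the Approximate Number System hallmark referred to above, means the response scales with the ratio between the two numerosities rather than their absolute difference. The sketch below assumes a simple log-ratio formulation purely for illustration; it is not the paper's fitted model.

        import math

        # Toy ratio-dependence model: predicted incongruency effect grows
        # with the log ratio of heard vs. seen numerosities (an assumption).
        def predicted_effect(heard: int, seen: int) -> float:
            return abs(math.log(heard / seen))

        for heard, seen in [(2, 3), (1, 2), (1, 6)]:
            print(f"{heard} vs {seen}: {predicted_effect(heard, seen):.2f}")
        # larger ratio mismatches (1 vs 6) yield larger predicted effects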

  17. Effects of prosody on spoken Thai word perception in pre-attentive brain processing: a pilot study

    Directory of Open Access Journals (Sweden)

    Kittipun Arunphalungsanti

    2016-12-01

    This study aimed to investigate the effect of unfamiliar stressed prosody on spoken Thai word perception in the pre-attentive processing of the brain, evaluated by the N2a and brain wave oscillatory activity. EEG recordings were obtained from eleven participants, who were instructed to ignore the sound stimuli while watching silent movies. Results showed that the prosody of unfamiliar stressed words elicited an N2a component, and quantitative EEG analysis found that theta and delta wave powers were principally generated in the frontal area. It is possible that the unfamiliar prosody, with different frequencies, durations and intensities of the sound of Thai words, induced highly selective attention and retrieval of information from episodic memory in the pre-attentive stage of speech perception. This brain electrical activity evidence could be used in further studies to develop clinical tests evaluating frontal lobe function in speech perception.

  18. Memory traces for spoken words in the brain as revealed by the hemodynamic correlate of the mismatch negativity.

    Science.gov (United States)

    Shtyrov, Yury; Osswald, Katja; Pulvermüller, Friedemann

    2008-01-01

    The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement the previous electrophysiological studies. In a passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effect analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than right hemisphere, was also assessed by analyzing average parameter estimates in regions of interest within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level-dependent mismatch responses to words than pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest involvement of the left superior temporal areas in housing such word-processing neuronal circuits.

  19. Word form Encoding in Chinese Word Naming and Word Typing

    Science.gov (United States)

    Chen, Jenn-Yeu; Li, Cheng-Yi

    2011-01-01

    The process of word form encoding was investigated in primed word naming and word typing with Chinese monosyllabic words. The target words shared or did not share the onset consonants with the prime words. The stimulus onset asynchrony (SOA) was 100 ms or 300 ms. Typing required the participants to enter the phonetic letters of the target word,…

  20. Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately

    NARCIS (Netherlands)

    Reinisch, E.; Jesse, A.; McQueen, J.M.

    2010-01-01

    For optimal word recognition listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye tracking we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as…

  1. The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition

    Science.gov (United States)

    Chan, Kit Ying; Vitevitch, Michael S.

    2009-01-01

    Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…
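
    The proportion described above can be computed directly: list a word's neighbors, then count the fraction of neighbor pairs that are themselves neighbors. The sketch below reuses the record's bat/hat/can/cat example, with letters standing in for phonemes as a simplification.

        from itertools import combinations

        # Clustering coefficient of "cat" for the example words above.
        def neighbors(a: str, b: str) -> bool:
            return (len(a) == len(b)
                    and sum(x != y for x, y in zip(a, b)) == 1)

        ns = [w for w in ["bat", "hat", "can"] if neighbors("cat", w)]  # all three
        pairs = list(combinations(ns, 2))
        linked = sum(neighbors(a, b) for a, b in pairs)  # only bat-hat are linked
        print(linked / len(pairs))  # 0.333...: low clustering around "cat"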

  2. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    Science.gov (United States)

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  3. Key Word Signing: Perceived and Acoustic Differences between Signed and Spoken Narratives.

    Science.gov (United States)

    Windsor, Jennifer; Fristoe, Macalyne

    1991-01-01

    This study examined keyword signing (KWS), a communication approach used with nonspeaking individuals. Acoustic measures and judgments of 20 adult listeners were used to evaluate KWS and Spoken-Only narratives. KWS narratives were produced with a slower articulation rate, because of increased pause and speech segment duration and increased pause…

  4. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With question-and-answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  5. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    Science.gov (United States)

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  6. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception

    Science.gov (United States)

    Liebenthal, Einat; Silbersweig, David A.; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala—a subcortical center for emotion perception—are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  7. The power of the spoken word: sociolinguistic cues influence the misinformation effect.

    Science.gov (United States)

    Vornik, Lana A; Sharman, Stefanie J; Garry, Maryanne

    2003-01-01

    We investigated whether the sociolinguistic information delivered by spoken, accented postevent narratives would influence the misinformation effect. New Zealand subjects listened to misleading postevent information spoken in either a New Zealand (NZ) or North American (NA) accent. Consistent with earlier research, we found that NA accents were seen as more powerful and more socially attractive. We found that accents per se had no influence on the misinformation effect but sociolinguistic factors did: both power and social attractiveness affected subjects' susceptibility to misleading postevent suggestions. When subjects rated the speaker highly on power, social attractiveness did not matter; they were equally misled. However, when subjects rated the speaker low on power, social attractiveness did matter: subjects who rated the speaker high on social attractiveness were more misled than those who rated the speaker lower. There were similar effects for confidence. These results have implications for our understanding of social influences on the misinformation effect.

  8. Spoken Dutch.

    Science.gov (United States)

    Bloomfield, Leonard

    This course in spoken Dutch is intended for use in introductory conversational classes. The book is divided into five major parts, each containing five learning units and one unit devoted to review. Each unit contains sections including (1) basic sentences, (2) word study and review of basic sentences, (3) listening comprehension, and (4)…

  10. A connectionist model for the simulation of human spoken-word recognition

    NARCIS (Netherlands)

    Kuijk, D.J. van; Wittenburg, P.; Dijkstra, A.F.J.

    1999-01-01

    A new psycholinguistically motivated, neural-network-based model of human word recognition is presented. In contrast to earlier models, it uses real speech as input. At the word layer, acoustical and temporal information is stored by sequences of connected sensory neurons that pass on sensor potentials.

  11. Neural Processing of Spoken Words in Specific Language Impairment and Dyslexia

    Science.gov (United States)

    Helenius, Paivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta

    2009-01-01

    Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition…

  13. The power of the spoken word in life, psychiatry, and psychoanalysis--a contribution to interpersonal psychoanalysis.

    Science.gov (United States)

    Lothane, Zvi

    2007-09-01

    Starting with an 1890 essay by Freud, the author goes in search of an interpersonal psychology native to Freud's psychoanalytic method, to psychoanalysis, and to the interpersonal method in psychiatry. This derives from the basic interpersonal nature of the human situation in the lives of individuals and social groups. Psychiatry, the healing of the soul, and psychotherapy, therapy of the soul, are examined from the perspective of the communication model, based on the essential interpersonal function of language and the spoken word: persons addressing speech to themselves and to others in relationships, between family members, others in society, and the professionals who serve them. The communicational model is also applied in examining psychiatric disorders and psychiatric diagnoses, as well as psychodynamic formulas, which leads to a reformulation of psychoanalytic therapy as a process. A plea is entered to define psychoanalysis as an interpersonal discipline, in analogy to Sullivan's interpersonal psychiatry.

  14. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words.

    Science.gov (United States)

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2017-04-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Distinct Patterns of Brain Activity Characterise Lexical Activation and Competition in Spoken Word Production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Jensen, O.; Schoffelen, J.M.; Bonnefond, M.

    2014-01-01

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography

  16. The Interplay between semantic and phonological constraints during spoken-word comprehension

    OpenAIRE

    Soto-Faraco, Salvador; Brunellière, Angèle

    2015-01-01

    This study addresses how top-down predictions driven by phonological and semantic information interact during spoken-word comprehension. To do so, we measured event-related potentials to words embedded in sentences that varied in the degree of semantic constraint (high or low) and in regional accent (congruent or incongruent) with respect to the target word pronunciation. The data showed a negative amplitude shift following phonological mismatch (target pronunciation incongruent with respect to se...

  17. Distinct patterns of brain activity characterise lexical activation and competition in spoken word production.

    Directory of Open Access Journals (Sweden)

    Vitória Piai

    Full Text Available According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350-650 ms (4-10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.

  18. Grasp it loudly! Supporting actions with semantically congruent spoken action words.

    Science.gov (United States)

    Fargier, Raphaël; Ménoret, Mathilde; Boulenger, Véronique; Nazir, Tatjana A; Paulignan, Yves

    2012-01-01

    Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g. "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters such as velocity peaks revealed that when participants produced action-related words their movements became faster compared to conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant to the point that language can refine overt motor behaviour.

  19. Grasp it loudly! Supporting actions with semantically congruent spoken action words.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Full Text Available Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g. "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters such as velocity peaks revealed that when participants produced action-related words their movements became faster compared to conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant to the point that language can refine overt motor behaviour.

  20. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    NARCIS (Netherlands)

    Jesse, A.; McQueen, J.M.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes i

  1. Children show right-lateralized effects of spoken word-form learning

    Science.gov (United States)

    Nora, Anni; Karvonen, Leena; Renvall, Hanna; Parviainen, Tiina; Kim, Jeong-Young; Service, Elisabet; Salmelin, Riitta

    2017-01-01

    It is commonly thought that phonological learning is different in young children compared to adults, possibly due to the speech processing system not yet having reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6–8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language rather than maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, and in contrast to a salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either native or foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody. PMID:28158201

  2. Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment

  4. Spoken Word Recognition Enhancement Due to Preceding Synchronized Beats Compared to Unsynchronized or Unrhythmic Beats.

    Science.gov (United States)

    Sidiras, Christos; Iliadou, Vasiliki; Nimatoudis, Ioannis; Reichenbach, Tobias; Bamiou, Doris-Eva

    2017-01-01

    The relation between rhythm and language has been investigated over the last decades, with evidence that these share overlapping perceptual mechanisms emerging from several different strands of research. The Dynamic Attention Theory posits that neural entrainment to musical rhythm results in synchronized oscillations in attention, enhancing perception of other events occurring at the same rate. In this study, this prediction was tested in 10-year-old children by means of a psychoacoustic speech recognition in babble paradigm. It was hypothesized that rhythm effects evoked via a short isochronous sequence of beats would provide optimal word recognition in babble when beats and word are in sync. We compared speech recognition in babble performance in the presence of an isochronous, in-sync sequence of beats vs. a non-isochronous or out-of-sync sequence of beats. Results showed that (a) word recognition was best when rhythm and word were in sync, and (b) the effect was not uniform across syllables and gender of subjects. Our results suggest that pure tone beats affect speech recognition at early levels of sensory or phonemic processing.

  5. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    Directory of Open Access Journals (Sweden)

    Vitória Piai

    2013-12-01

    Full Text Available Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI), vocal colour naming while ignoring distractors (Stroop), and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus. Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the anterior cingulate cortex, a region that is likely implementing domain-general attentional control.

  6. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    Science.gov (United States)

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
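
    The contrast tested above, prediction from whole-word competitors versus from the root morpheme alone, can be pictured as two ways of building a candidate set from partial phonemic input. The sketch below is a toy illustration only: the lexicon, the root mapping, and the vowel inventory are invented and do not reflect the authors' stimuli or their probabilistic model.

        # Two ways of computing the candidate set activated by partial input:
        # (a) a whole-word cohort of words whose phonemes start with the input;
        # (b) a root cohort of consonantal roots consistent with the consonants
        #     heard so far. Lexicon and root mapping are made-up examples.

        LEXICON = {"kataba": "ktb", "kutub": "ktb", "kalima": "klm", "darasa": "drs"}

        def whole_word_cohort(prefix):
            return {word for word in LEXICON if word.startswith(prefix)}

        def root_cohort(prefix):
            consonants = [p for p in prefix if p not in "aiu"]
            return {root for root in set(LEXICON.values())
                    if list(root[:len(consonants)]) == consonants}

        print(whole_word_cohort("ka"))  # {'kataba', 'kalima'}
        print(root_cohort("ka"))        # {'ktb', 'klm'}: root-level candidates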

  7. Vocabulary learning in a Yorkshire terrier: slow mapping of spoken words.

    Directory of Open Access Journals (Sweden)

    Ulrike Griebel

    Full Text Available Rapid vocabulary learning in children has been attributed to "fast mapping", with new words often claimed to be learned through a single presentation. As reported in 2004 in Science, a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion.

  8. Vocabulary learning in a Yorkshire terrier: slow mapping of spoken words.

    Science.gov (United States)

    Griebel, Ulrike; Oller, D Kimbrough

    2012-01-01

    Rapid vocabulary learning in children has been attributed to "fast mapping", with new words often claimed to be learned through a single presentation. As reported in 2004 in Science a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion.

  9. Simulating Retrieval from a Highly Clustered Network: Implications for Spoken Word Recognition

    Science.gov (United States)

    Vitevitch, Michael S.; Ercal, Gunes; Adagarla, Bhargav

    2011-01-01

    Network science describes how entities in complex systems interact, and argues that the structure of the network influences processing. Clustering coefficient, C – one measure of network structure – refers to the extent to which neighbors of a node are also neighbors of each other. Previous simulations suggest that networks with low C dissipate information (or disease) to a large portion of the network, whereas in networks with high C information (or disease) tends to be constrained to a smaller portion of the network (Newman, 2003). In the present simulation we examined how C influenced the spread of activation to a specific node, simulating retrieval of a specific lexical item in a phonological network. The results of the network simulation showed that words with lower C had higher activation values (indicating faster or more accurate retrieval from the lexicon) than words with higher C. These results suggest that a simple mechanism for lexical retrieval can account for the observations made in Chan and Vitevitch (2009), and have implications for diffusion dynamics in other fields. PMID:22174705
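
    The two network quantities involved here, the clustering coefficient C of a node and the activation that returns to a target word after spreading to its neighbors, can both be computed on a small graph. The toy phonological neighbor graph below is invented for illustration; it is not the lexicon used in the reported simulation.

        # Clustering coefficient and a two-hop spreading-activation measure on
        # a toy phonological neighbor graph (adjacency stored as sets).

        GRAPH = {
            "cat": {"hat", "bat", "cap"},
            "hat": {"cat", "bat"},
            "bat": {"cat", "hat"},
            "cap": {"cat"},
        }

        def clustering(node):
            """Fraction of a node's neighbor pairs that are themselves linked."""
            nbrs = list(GRAPH[node])
            k = len(nbrs)
            if k < 2:
                return 0.0
            links = sum(1 for i in range(k) for j in range(i + 1, k)
                        if nbrs[j] in GRAPH[nbrs[i]])
            return 2.0 * links / (k * (k - 1))

        def returned_activation(node, total=1.0):
            """Split the target's activation equally among its neighbors, then
            measure how much flows straight back to the target one hop later."""
            share = total / len(GRAPH[node])
            return sum(share / len(GRAPH[n]) for n in GRAPH[node] if node in GRAPH[n])

        for word in ("cat", "cap"):
            print(word, round(clustering(word), 2), round(returned_activation(word), 2))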

  10. The effects of context and feedback on age differences in spoken word recognition.

    Science.gov (United States)

    Stine-Morrow, E A; Soederberg Miller, L M; Nevin, J A

    1999-03-01

    We investigated the hypothesis that age differences in speech discrimination would be reduced by enhancing the distinctiveness of the speech processing event in terms of both the context of encoding and the response outcome. Younger and older adults performed an auditory lexical decision task in which the degree of semantic constraint (context) and type of feedback were manipulated. Main effects of age indicated that older adults generally showed lower discriminability (D) and greater bias (B) toward reporting signals to be words. Consistent with the environmental support hypothesis, older adults were differentially facilitated in discriminability by feedback, but only when semantic context was provided. Also, for both younger and older adults, feedback and context each had the effect of reducing bias and facilitating the speed of rejecting nonwords. Contrary to one suggestion in the literature that aging brings an insensitivity to environmental contingency, older adults were at least as capable as the young in taking advantage of feedback to normalize the speech signal so as to increase discriminability and decrease bias.
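
    The discriminability (D) and bias (B) measures reported above come from a detection-theoretic analysis of lexical decision. This record does not define the exact indices used, so the sketch below uses the standard signal-detection pair, d-prime and criterion c, as assumed stand-ins; the hit and false-alarm rates are invented.

        # Standard signal-detection indices from hit and false-alarm rates,
        # as stand-ins for the study's D and B measures (not defined in this
        # record). Rates below are invented for the example.

        from statistics import NormalDist

        z = NormalDist().inv_cdf  # inverse of the standard normal CDF

        def d_prime(hit_rate, fa_rate):
            return z(hit_rate) - z(fa_rate)

        def criterion(hit_rate, fa_rate):
            return -0.5 * (z(hit_rate) + z(fa_rate))

        # "Word" responses to words (hits) vs. to nonwords (false alarms).
        print(d_prime(0.90, 0.30))    # ~1.81: good word/nonword discrimination
        print(criterion(0.90, 0.30))  # ~-0.38: negative c = bias toward "word"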

  11. Research note: exceptional absolute pitch perception for spoken words in an able adult with autism.

    Science.gov (United States)

    Heaton, Pamela; Davis, Robert E; Happé, Francesca G E

    2008-01-01

    Autism is a neurodevelopmental disorder, characterised by deficits in socialisation and communication, with repetitive and stereotyped behaviours [American Psychiatric Association (1994). Diagnostic and statistical manual for mental disorders (4th ed.). Washington, DC: APA]. Whilst intellectual and language impairment is observed in a significant proportion of diagnosed individuals [Gillberg, C., & Coleman, M. (2000). The biology of the autistic syndromes (3rd ed.). London: Mac Keith Press; Klinger, L., Dawson, G., & Renner, P. (2002). Autistic disorder. In E. Mash, & R. Barkley (Eds.), Child psychopathology (2nd ed., pp. 409-454). New York: Guilford Press], the disorder is also strongly associated with the presence of highly developed, idiosyncratic, or savant skills [Heaton, P., & Wallace, G. (2004) Annotation: The savant syndrome. Journal of Child Psychology and Psychiatry, 45 (5), 899-911]. We tested identification of fundamental pitch frequencies in complex tones, sine tones and words in AC, an intellectually able man with autism and absolute pitch (AP), and a group of healthy controls with self-reported AP. The analysis showed that AC's naming of speech pitch was highly superior in comparison to controls. The results suggest that explicit access to perceptual information in speech is retained to a significantly higher degree in autism.

  12. Children’s Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time

    Science.gov (United States)

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2016-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665

  13. Children's Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time.

    Science.gov (United States)

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2015-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.

  14. The interaction of lexical tone, intonation and semantic context in on-line spoken word recognition: an ERP study on Cantonese Chinese.

    Science.gov (United States)

    Kung, Carmen; Chwilla, Dorothee J; Schriefers, Herbert

    2014-01-01

    In two ERP experiments, we investigate the on-line interplay of lexical tone, intonation and semantic context during spoken word recognition in Cantonese Chinese. Experiment 1 shows that lexical tone and intonation interact immediately. Words with a low lexical tone at the end of questions (with a rising question intonation) lead to a processing conflict. This is reflected in a low accuracy in lexical identification and in a P600 effect compared to the same words at the end of a statement. Experiment 2 shows that a strongly biasing semantic context leads to much better lexical-identification performance for words with a low tone at the end of questions and to a disappearance of the P600 effect. These results support the claim that semantic context plays a major role in disentangling the tonal information from the intonational information, and thus, in resolving the on-line conflict between intonation and tone. However, the ERP data indicate that the introduction of a semantic context does not entirely eliminate on-line processing problems for words at the end of questions. This is revealed by the presence of an N400 effect for words with a low lexical tone and for words with a high-mid lexical tone at the end of questions. The ERP data thus show that, while semantic context helps in the eventual lexical identification, it makes the deviation of the contextually expected lexical tone from the actual acoustic signal more salient. © 2013 Published by Elsevier Ltd.

  15. Parametric merging of MEG and fMRI reveals spatiotemporal differences in cortical processing of spoken words and environmental sounds in background noise.

    Science.gov (United States)

    Renvall, Hanna; Formisano, Elia; Parviainen, Tiina; Bonte, Milene; Vihla, Minna; Salmelin, Riitta

    2012-01-01

    There is an increasing interest to integrate electrophysiological and hemodynamic measures for characterizing spatial and temporal aspects of cortical processing. However, an informative combination of responses that have markedly different sensitivities to the underlying neural activity is not straightforward, especially in complex cognitive tasks. Here, we used parametric stimulus manipulation in magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) recordings on the same subjects, to study effects of noise on processing of spoken words and environmental sounds. The added noise influenced MEG response strengths in the bilateral supratemporal auditory cortex, at different times for the different stimulus types. Specifically for spoken words, the effect of noise on the electrophysiological response was remarkably nonlinear. Therefore, we used the single-subject MEG responses to construct parametrization for fMRI data analysis and obtained notably higher sensitivity than with conventional stimulus-based parametrization. fMRI results showed that partly different temporal areas were involved in noise-sensitive processing of words and environmental sounds. These results indicate that cortical processing of sounds in background noise is stimulus specific in both timing and location and provide a new functionally meaningful platform for combining information obtained with electrophysiological and hemodynamic measures of brain function.

  16. Syllable and Segments Effects in Mandarin Chinese Spoken Word Production [Facilitation and Inhibition Effects of Syllables and Segments in Mandarin Spoken Word Production]

    Institute of Scientific and Technical Information of China (English)

    岳源; 张清芳

    2015-01-01

    Using the picture-word interference paradigm, and comparing immediate naming, delayed naming, and delayed naming combined with articulatory suppression, this study examined the effects of syllables and segments at different stages of word-form encoding in Mandarin spoken word production. Compared with the unrelated condition, in the immediate naming task (which involves phonological encoding, phonetic encoding, and articulation), the syllable-related and segment-related conditions significantly shortened picture-naming latencies, showing syllable and segment facilitation effects. In the delayed naming task (which involves the articulation stage), the syllable-related and segment-related conditions significantly lengthened picture-naming latencies, showing syllable and segment inhibition effects. In the delayed naming task combined with articulatory suppression (which involves phonetic encoding and articulation), the segment-related condition significantly lengthened picture-naming latencies, showing a segment inhibition effect. The results indicate that the facilitation effects of syllables and segments arise at the phonological encoding stage of Mandarin spoken word production, whereas the inhibition effects may arise at the phonetic encoding or articulation stage. Effect-size analyses (Cohen's d) showed a strong syllable facilitation effect but a weak segment facilitation effect, suggesting that the syllable is the appropriate unit of phonological encoding and providing support for the proximate units hypothesis. Compared with syllables, segments showed larger effect sizes at the phonetic encoding and articulation stages, indicating that segments may play a relatively important role in motor execution and supporting the view that the lexical preparation and motor stages of spoken word production are separable.%Speaking involves stages of conceptual preparation, lemma selection, word-form encoding and articulation. Furthermore, process of word-form encoding can be divided into morphological encoding process, phonological encoding process and phonetic encoding. What is the function unit at the stage of word-form encoding remains a controversial issue in speech production theories. The present study investigated syllable and segments effects at the stages of phonological encoding, phonetic encoding, and articulation in Mandarin spoken word production. Using Picture-Word Interference (PWI) Paradigm, we compared the effects

  17. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.

  18. Representation of spectro-temporal features of spoken words within the P1-N1-P2 and T-complex of the auditory evoked potentials (AEP).

    Science.gov (United States)

    Wagner, Monica; Roychoudhury, Arindam; Campanelli, Luca; Shafer, Valerie L; Martin, Brett; Steinschneider, Mitchell

    2016-02-12

    The purpose of the study was to determine whether P1-N1-P2 and T-complex morphology reflect spectro-temporal features within spoken words that approximate the natural variation of a speaker and whether waveform morphology is reliable at group and individual levels, necessary for probing auditory deficits. The P1-N1-P2 and T-complex to the syllables /pət/ and /sət/ within 70 natural word productions each were examined. EEG was recorded while participants heard nonsense word pairs and performed a syllable identification task to the second word in the pairs. Single trial auditory evoked potentials (AEP) to the first words were analyzed. Results found P1-N1-P2 and T-complex to reflect spectral and temporal feature processing. Also, results identified preliminary benchmarks for single trial response variability for individual subjects for sensory processing between 50 and 600ms. P1-N1-P2 and T-complex, at least at group level, may serve as phenotypic signatures to identify deficits in spectro-temporal feature recognition and to determine area of deficit, the superior temporal plane or lateral superior temporal gyrus.

  19. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Is the time course of lexical activation and competition in spoken word recognition affected by adult aging? An event-related potential (ERP) study.

    Science.gov (United States)

    Hunter, Cynthia R

    2016-10-01

    Adult aging is associated with decreased accuracy for recognizing speech, particularly in noisy backgrounds and for high neighborhood density words, which sound similar to many other words. In the current study, the time course of neighborhood density effects in young and older adults was compared using event-related potentials (ERP) and behavioral responses in a lexical decision task for spoken words and nonwords presented either in quiet or in noise. Target items sounded similar either to many or to few other words (neighborhood density) but were balanced for the frequency of their component sounds (phonotactic probability). Behavioral effects of density were similar across age groups, but the event-related potential effects of density differed as a function of age group. For young adults, density modulated the amplitude of both the N400 and the later P300 or late positive complex (LPC). For older adults, density modulated only the amplitude of the P300/LPC. Thus, spreading activation to the semantics of lexical neighbors, indexed by the N400 density effect, appears to be reduced or delayed in adult aging. In contrast, effects of density on P300/LPC amplitude were present in both age groups, perhaps reflecting attentional allocation to items that resemble few words in the mental lexicon. The results constitute the first evidence that ERP effects of neighborhood density are affected by adult aging. The age difference may reflect either a unitary density effect that is delayed by approximately 150ms in older adults, or multiple processes that are differentially affected by aging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Spoken Korean: Book One.

    Science.gov (United States)

    Lukoff, Fred

    This text is designed for students planning to learn spoken Korean. Ten lessons and two review sections based on cultural experiences commonly shared by Koreans are included in the text. Grouped in series of five lessons, the instructional materials include (1) basic sentences, (2) word study and review of basic sentences, (3) listening…

  2. Spoken Russian; Book Two.

    Science.gov (United States)

    Bloomfield, Leonard; Petrova, Luba

    This course in spoken Russian is intended for use in introductory conversational classes. Book II in the two volume series is divided into three major parts, each containing five learning units and one unit devoted to review. Each unit contains sections including (1) basic sentences, (2) word study and review of basic sentences, (3) listening…

  4. From the Coffee House to the School House: The Promise and Potential of Spoken Word Poetry in School Contexts

    Science.gov (United States)

    Fisher, Maisha T.

    2005-01-01

    In the two high school writing communities described in this article, literacy is strategic, purposeful, and always linked to meaning. The foundation for literate practices in these communities is Freirian in nature. Teachers, in a very serious way, work to liberate language and prepare students to be in control of words; they do this by allowing…

  5. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Science.gov (United States)

    Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping

    2017-01-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading. PMID:28690507

  6. [Evaluation of hearing aid rehabilitation using the Freiburg Monosyllabic Test].

    Science.gov (United States)

    Hoppe, U

    2016-08-01

    The benefit of hearing aids is not always directly subjectively perceivable. Therefore, objective and quantifiable speech audiometric measurements are required. Besides acoustic gain measurements and structured interviews, speech audiometry in quiet and in noise is one of the three pillars of hearing aid evaluation. The Freiburg monosyllabic test has been used for decades for hearing aid prescription and evaluation in German-speaking countries. Relative and absolute targets can be individually defined for the rehabilitation of speech perception by hearing aids, as assessed by the Freiburg monosyllabic test in quiet and at conversational levels. The general applicability of speech audiometric measurements in noise is limited. Alternative ("modern") methods and the definitions of noise situations relevant to everyday life have been discussed for years. However, the introduction of these methods into everyday use has proven difficult. On the one hand, there is comparatively little practical experience; on the other, it has not yet been demonstrated what additional benefits these more complicated measurements might have for standard hearing aid evaluations and hearing aid users.

  7. Lexico-semantic effects on word naming in Persian: does age of acquisition have an effect?

    Science.gov (United States)

    Bakhtiar, Mehdi; Weekes, Brendan

    2015-02-01

    The age of acquisition (AoA) of a word has an effect on skilled reading performance. According to the arbitrary-mapping (AM) hypothesis, AoA effects on word naming are a consequence of arbitrary mappings between input and output in the lexical network. The AM hypothesis predicts that effects of AoA will be observed when words have unpredictable orthography-to-phonology (OP) mappings. The Persian writing system is characterized by a degree of consistency between OP mappings, making words transparent. However, the omission of vowels in the script used by skilled readers makes the OP mappings of many words unpredictable or opaque. In this study, we used factor analysis to test which lexico-semantic variables, including AoA, predict the reading aloud of monosyllabic Persian words with different spelling transparencies (transparent or opaque). Linear mixed-effect regression analysis revealed that a Lexical factor (loading on word familiarity, spoken frequency, and written frequency) and a Semantic factor (loading on AoA, imageability, and familiarity) significantly predict word-naming latencies in Persian. Further analysis revealed a significant interaction between AoA and transparency, with larger effects of AoA for opaque than for transparent words and a significant interaction between imageability and AoA on reading opaque words; that is, AoA effects are more pronounced for low-imageability opaque words than for high-imageability opaque words. Interactions between these factors and spelling transparency suggest that late-acquired opaque words receive greater input from the semantic reading route. Implications for understanding the AoA effects on word naming in Persian are discussed.

  8. Evaluative language in spoken and signed stories told by a deaf child with a cochlear implant: words, signs or paralinguistic expressions?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character's actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  9. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    Science.gov (United States)

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  10. Taking the British Library Forward in the Twenty-First Century; Harvard's Library Digital Initiative: Building a First Generation Digital Library Infrastructure; Spoken Words, Unspoken Meanings: A DLI2 Project Ethnography; Resource Guide for the Social Sciences: Signposting a Dissemination and Support Route for Barefoot and Meta-Librarians in UK Higher Education.

    Science.gov (United States)

    Brindley, Lynne; Flecker, Dale; Seadle, Michael; Huxley, Lesly; Ford, Karen

    2000-01-01

    Includes four articles that discuss strategic planning in the British Library, including electronic strategies and collaborative partnerships; Harvard University's plans for a digital library infrastructure; the National Gallery of the Spoken Word, a Digital Library Initiative (DLI)-funded project that is language-related; and promoting networked…

  11. Children's Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time

    National Research Council Canada - National Science Library

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2015-01-01

    ...) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first- (L1...

  15. Perceptual evidence for protracted development in monosyllabic Mandarin lexical tone production in preschool children in Taiwan.

    Science.gov (United States)

    Wong, Puisan

    2013-01-01

    This study used the same methodology as Wong [J. Speech Lang. Hear. Res. 55, 1423-1437 (2012b)] to examine the perceived accuracy of monosyllabic Mandarin tones produced by 4- and 5-year-old Mandarin-speaking children growing up in Taiwan and combined the findings with those of 3-year-olds reported in Wong [J. Speech Lang. Hear. Res. 55, 1423-1437 (2012b)] to track the development of monosyllabic tone production in preschool children. Tone productions of adults and children were collected in a picture naming task and low-pass filtered to remove lexical information and preserve tone information. Five native speakers categorized the target tones in the filtered productions. Children's tone accuracy was compared to adults' to determine mastery and developmental changes. The results showed that preschool children in Taiwan have not fully mastered the production of monosyllabic Mandarin tones. None of the tones produced by the children in the three age groups reached adult-like accuracy. Little developmental change was found in children's tone accuracy during the preschool years. A similar order of accuracy of the tones was observed across the three age groups, and the order appeared to follow the order of articulatory complexity in producing the tones. The findings suggest a protracted course of development in children's acquisition of Mandarin tones and that tone development may be constrained by physiological factors.

  16. Spoken Discourse Analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Xi; CHEN Man-ping

    2015-01-01

    A number of approaches have been developed to analyze spoken discourse, based on different theoretical perspectives. In this essay, I classify the approaches into two categories according to their attitudes toward the nature of spoken discourse. The first group regards spoken discourse from a more static point of view, while the second takes more account of its dynamic nature.

  17. Spoken Records. Third Edition.

    Science.gov (United States)

    Roach, Helen

    Surveying 75 years of accomplishment in the field of spoken recording, this reference work critically evaluates commercially available recordings selected for excellence of execution, literary or historical merit, interest, and entertainment value. Some types of spoken records included are early recording, documentaries, lectures, interviews,…

  18. Zipf and Heaps Laws in Human spoken English Language

    CERN Document Server

    Lin, Ruokuang; Bian, Chunhua

    2014-01-01

    Zipf's law on word frequency and Heaps' law on the growth of distinct words are widely observed in many different written languages. In this paper, via extensive analysis of spoken transcriptions, we found that the word frequencies of spoken transcriptions exhibit a power-law distribution obeying Zipf's law, and that the growth of distinct words obeys Heaps' law. However, in speech the usage of words is much more concentrated on a subset of words, which leads to a larger probability of high-frequency words than in books; also, as the content length grows, new words emerge more slowly than in books. These observations should be useful for speech analysis and modeling.
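
    Both regularities are straightforward to check on any transcript: Zipf's law predicts word frequency roughly proportional to 1/rank, and Heaps' law predicts vocabulary size growing as a power of text length. A minimal sketch, with a short placeholder string standing in for a real spoken transcription:

        # Check Zipf's law (rank-frequency) and Heaps' law (vocabulary growth)
        # on a token stream. The sample text is a placeholder; a real test
        # needs a sizeable transcript.

        from collections import Counter

        tokens = "the quick fox and the lazy dog and the fox".split()

        # Zipf: under the law, frequency * rank stays roughly constant.
        freqs = sorted(Counter(tokens).values(), reverse=True)
        for rank, freq in enumerate(freqs, start=1):
            print(rank, freq, freq * rank)

        # Heaps: vocabulary size V(n) after n tokens is predicted to grow as
        # V(n) ~ K * n**beta with 0 < beta < 1.
        seen = set()
        growth = []
        for n, token in enumerate(tokens, start=1):
            seen.add(token)
            growth.append((n, len(seen)))
        print(growth)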

  19. Words, words, words!

    Science.gov (United States)

    2015-09-01

    Words matter. They are the "atoms" of written and oral communication. Students rely on words in textbooks and other instructional resources and in classroom lectures and discussions. As instructors, we sometimes need to think carefully about the words we use. There are problems that may not be initially apparent, and we may introduce confusion when we are aiming for clarity.

  20. Brazilian Portuguese Words for Design

    OpenAIRE

    Gies, Sheila; Cassidy, Tracy Diane

    2009-01-01

    Brazilian Portuguese is the Portuguese spoken in Brazil, which has slight differences from the Portuguese spoken in Portugal. One may try to understand such differences by comparing them with the dissimilarities between American English and British English. Although this article does not intend to establish potential differences between Brazilian Portuguese and the Portuguese spoken in other countries, such as Portugal, it is important to bear in mind that divergences in meaning of words ...

  1. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  2. Spoken Grammar for Chinese Learners

    Institute of Scientific and Technical Information of China (English)

    徐晓敏

    2013-01-01

    Currently, the concept of spoken grammar is being discussed among Chinese teachers. However, teachers in China still have a vague idea of what spoken grammar is. This dissertation therefore examines what spoken grammar is and argues that native speakers' model of spoken grammar needs to be highlighted in classroom teaching.

  3. Effects of Rhyme and Spelling Patterns on Auditory Word ERPs Depend on Selective Attention to Phonology

    Science.gov (United States)

    Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.

    2013-01-01

    ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…

  5. Achieving English Spoken Fluency

    Institute of Scientific and Technical Information of China (English)

    王鲜杰

    2000-01-01

    Language is first and foremost oral: spoken language. Speaking is the most important of the four skills (listening, speaking, reading, writing) and also the most difficult. To have an all-round command of a language, one must be able to speak and to understand the spoken language; it is not enough for a language learner to have good reading and writing skills alone. As English language teachers, we need to focus on improving learners' English speaking skill to meet the needs of our society and our country, and to provide learners with some useful techniques for achieving spoken fluency in English. This paper focuses on how to improve learners' speaking skill.

  6. Automatic translation among spoken languages

    Science.gov (United States)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  7. Monosyllabic Mandarin Tone Productions by 3-Year-Olds Growing up in Taiwan and in the United States: Interjudge Reliability and Perceptual Results

    Science.gov (United States)

    Wong, Puisan

    2012-01-01

    Purpose: The author compared monosyllabic Mandarin lexical tones produced by 3-year-old Mandarin-speaking children growing up in Taiwan and in the United States. Method: Following the procedures in Wong, Schwartz, and Jenkins (2005), the author collected monosyllabic tone productions from 3-year-old Mandarin-speaking children in Taiwan and…

  8. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  9. Syllabic Strategy as Opposed to Coda Optimization in the Segmentation of Spanish Letter-Strings Using Word Spotting

    Science.gov (United States)

    Álvarez, Carlos J.; Taft, Marcus; Hernández-Cabrera, Juan A.

    2017-01-01

    A word-spotting task is used in Spanish to test the way in which polysyllabic letter-strings are parsed in this language. Monosyllabic words (e.g., "bar") embedded at the beginning of a pseudoword were immediately followed by either a coda-forming consonant (e.g., "barto") or a vowel (e.g., "baros"). In the former…

  10. Communicating Emotion: Linking Affective Prosody and Word Meaning

    Science.gov (United States)

    Nygaard, Lynne C.; Queen, Jennifer S.

    2008-01-01

    The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming…

  11. Research on Spoken Dialogue Systems

    Science.gov (United States)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  12. Predictors of spoken language learning.

    Science.gov (United States)

    Wong, Patrick C M; Ettlinger, Marc

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. Those who were successful showed higher activation in bilateral auditory cortex, larger volume in Heschl's gyrus, and more accurate pitch pattern perception; all of these measures were taken before training began. In the second set of experiments, native English-speaking adults learned a phonological grammar governing the formation of words of an artificial language. Again, neurophysiological, neuroanatomical, and cognitive factors predicted to an extent how well these adults learned. Taken together, these experiments suggest that neural and behavioral factors can be used to predict spoken language learning, and that these predictors can inform the redesign of existing training paradigms to maximize learning for learners with different learning profiles. Readers will be able to: (a) understand the linguistic concepts of lexical tone and phonological grammar, (b) identify the brain regions associated with learning lexical tone and phonological grammar, and (c) identify the cognitive predictors of successful learning of a tone language and phonological rules. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Teaching the Spoken Language.

    Science.gov (United States)

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  15. Teaching Spoken Spanish

    Science.gov (United States)

    Lipski, John M.

    1976-01-01

    The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)

  16. ACTION RESEARCH : IMPROVING STUDENTS’ SPOKEN INTERACTIONS THROUGH POSTER SESSION

    Directory of Open Access Journals (Sweden)

    Seftika Seftika

    2015-10-01

    Spoken interaction is beneficial in learning a language, yet classroom interaction often does not take place well. Because of this lack of student interaction, this study aimed to improve students' spoken interaction through Poster Session. A classroom action research project was carried out with fourth-semester English major students at STKIP Muhammadiyah Pringsewu Lampung. In collecting the data, the researcher used observation, tests, and documentation. The data collected were analyzed and synthesized both qualitatively and quantitatively, and interpretations were then built to understand clearly the process that occurred during the research. The results indicate that students' spoken interaction improves with Poster Session: it gives students practice in English spoken interaction and encourages them to be involved in learner-learner interaction, which in turn benefits their speaking skill. Key words: interaction, speaking, Poster Session

  17. The Speech-Language Interface in the Spoken Language Translator

    CERN Document Server

    Carter, D; Carter, David; Rayner, Manny

    1994-01-01

    Abstract: The Spoken Language Translator is a prototype for practically useful systems capable of translating continuous spoken language within restricted domains. The prototype system translates air travel (ATIS) queries from spoken English to spoken Swedish and to French. It is constructed, with as few modifications as possible, from existing pieces of speech and language processing software. The speech recognizer and language understander are connected by a fairly conventional pipelined N-best interface. This paper focuses on the ways in which the language processor makes intelligent use of the sentence hypotheses delivered by the recognizer. These ways include (1) producing modified hypotheses to reflect the possible presence of repairs in the uttered word sequence; (2) fast parsing with a version of the grammar automatically specialized to the more frequent constructions in the training corpus; and (3) allowing syntactic and semantic factors to interact with acoustic ones in the choice of a meaning struc...

  18. Chinese spoken language understanding in SHTQS

    Institute of Scientific and Technical Information of China (English)

    MAO Jia-ju; GUO Rong; LU Ru-zhan

    2005-01-01

    Spoken dialogue systems are an active research field with wide applications, but Chinese spoken dialogue systems are not yet as well developed as English ones. Chinese spoken dialogues exhibit many language phenomena: most utterances are ill-formed, and ellipsis, anaphora, and negation are widely used. Determining how to extract semantic information from incomplete sentences and how to resolve negation, anaphora, and ellipsis is therefore crucial. SHTQS (Shanghai Transportation Query System) is an intelligent telephone-based spoken dialogue system providing information about the best route between any two sites in Shanghai. After a brief description of the system, its natural language processing is emphasized. Speech recognition output unavoidably contains errors, and in a sequential language-processing pipeline these errors are easily passed to later stages, producing a ripple effect. To detect and recover from these errors as early as possible, the language-processing strategies are specially designed: for errors resulting from incorrectly divided words in speech recognition, segmentation and POS-tagging approaches that can rectify these errors are used. Since most inquiry utterances are ill-formed and negation, anaphora, and ellipsis are common, the language understanding must be suitably adaptive, so a partial syntactic parsing scheme using a chart algorithm is adopted. The parser is based on unification grammar, and the semantic frame extracted from the best arc set of the chart represents the meaning of the sentence. Negation, anaphora, and ellipsis are also analyzed, and corresponding processing approaches are presented. The accuracy of the language-processing part is 88.39%, and testing shows that the language-processing strategies are rational and effective.
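
    The abstract gives no implementation detail for the semantic frames, so the following is only a toy illustration of the general idea: reducing a route query to a frame with slots that later dialogue processing (ellipsis and anaphora resolution) could fill. All names and patterns here are invented.

```python
# Toy illustration only: a semantic frame for a route query, loosely in the
# spirit of the SHTQS description (not the system's actual representation).
import re

ROUTE = re.compile(r"from (?P<origin>\w+) to (?P<destination>\w+)")

def parse_route_query(utterance):
    # Unfilled slots stay None so later turns (after ellipsis or anaphora
    # resolution) can complete the frame incrementally.
    frame = {"intent": "route_query", "origin": None, "destination": None}
    match = ROUTE.search(utterance.lower())
    if match:
        frame.update(match.groupdict())
    return frame

print(parse_route_query("What is the best route from Pudong to Hongqiao?"))
# -> {'intent': 'route_query', 'origin': 'pudong', 'destination': 'hongqiao'}
```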

  19. THE RECOGNITION OF SPOKEN MONO-MORPHEMIC COMPOUNDS IN CHINESE

    Directory of Open Access Journals (Sweden)

    Yu-da Lai

    2012-12-01

    This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters, whether or not they are morphemic. A mono-morphemic compound may either be a binding word, written with characters that appear only in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine whether this purely orthographic difference affects auditory lexical access, in a series of four experiments with materials matched on whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition task and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access is localized to the decision component and involves the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.

  20. Phonemic Analysis: Effects of Word Properties.

    Science.gov (United States)

    Schreuder, Robert; van Bon, Wim H. J.

    The effects of word length, consonant-vowel structure, syllable structure, and meaning on phonemic word segmentation were investigated in two experiments with young children. The decentration hypothesis, which predicts that children who habitually direct their attention to word meaning would concentrate better at analyzing a spoken form without…

  1. Effects of Age and Experience on the Production of English Word-Final Stops by Korean Speakers

    Science.gov (United States)

    Baker, Wendy

    2010-01-01

    This study examined the effect of second language (L2) age of acquisition and amount of experience on the production of word-final stop consonant voicing by adult native Korean learners of English. Thirty learners, who differed in amount of L2 experience and age of L2 exposure, and 10 native English speakers produced 8 English monosyllabic words…

  2. Is spoken Danish less intelligible than Swedish?

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.

    2010-01-01

    The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is

  3. Influences of High and Low Variability on Infant Word Recognition

    Science.gov (United States)

    Singh, Leher

    2008-01-01

    Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural…

  4. Optimally efficient neural systems for processing spoken language.

    Science.gov (United States)

    Zhuang, Jie; Tyler, Lorraine K; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D

    2014-04-01

    Cognitive models claim that spoken words are recognized by an optimally efficient sequential analysis process. Evidence for this is the finding that nonwords are recognized as soon as they deviate from all real words (Marslen-Wilson 1984), reflecting continuous evaluation of speech inputs against lexical representations. Here, we investigate the brain mechanisms supporting this core aspect of word recognition and examine the processes of competition and selection among multiple word candidates. Based on new behavioral support for optimal efficiency in lexical access from speech, a functional magnetic resonance imaging study showed that words with later nonword points generated increased activation in the left superior and middle temporal gyrus (Brodmann area [BA] 21/22), implicating these regions in dynamic sound-meaning mapping. We investigated competition and selection by manipulating the number of initially activated word candidates (competition) and their later drop-out rate (selection). Increased lexical competition enhanced activity in bilateral ventral inferior frontal gyrus (BA 47/45), while increased lexical selection demands activated bilateral dorsal inferior frontal gyrus (BA 44/45). These findings indicate functional differentiation of the fronto-temporal systems for processing spoken language, with left middle temporal gyrus (MTG) and superior temporal gyrus (STG) involved in mapping sounds to meaning, bilateral ventral inferior frontal gyrus (IFG) engaged in less constrained early competition processing, and bilateral dorsal IFG engaged in later, more fine-grained selection processes.

  5. A Descriptive Study of Registers Found in Spoken and Written Communication (A Semantic Analysis)

    OpenAIRE

    Nurul Hidayah

    2016-01-01

    This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research. The data of the study are registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and on the internet. The data can take the form of words, phrases, and abbreviations. As regards the method of data collection, the writer uses the library method as her instrument. Th...

  6. The employment of a spoken language computer applied to an air traffic control task.

    Science.gov (United States)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken-language (56-word) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter, with a traffic-flow simulation ranging from single-engine aircraft to commercial jets, provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.

  7. The impact of impaired vocal quality on children's ability to process spoken language.

    Science.gov (United States)

    Morton, V; Watson, D R

    2001-01-01

    This paper investigated the effect of voice quality on children's ability to process spoken language. A group of 24 children, mean age 11 years 5 months, listened to a series of recorded short passages, half spoken by a female with a normal voice and half spoken by a female with a classic vocal impairment (dysphonic voice). The children were tested for their ability to recall words and to draw a final target inference. Children performed better on both indices when listening to the normal voice. The implications of the findings are discussed, with particular reference to the classroom situation.

  8. How long-term memory and accentuation interact during spoken language comprehension.

    Science.gov (United States)

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

    Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Moreover, both the information derived from the speech signal and the information retrieved from long-term memory exert their influence on comprehension immediately. Using EEG (electroencephalography), the present study investigated how information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information, which was either highly or weakly expected given the information retrieved from long-term memory, and which was either consistently accented or inconsistently de-accented. The results revealed that for weakly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacts immediately with the information retrieved from long-term memory. The results also have important consequences for our understanding of the processing nature of the N400: its amplitude is enhanced not only for incorrect information (new and de-accented words) but also for correct information (new and accented words).

  9. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in spoken dialog systems (SDS). The stochastic methods presented allow flexible, portable, and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors use statistical methods based on acoustic, linguistic, and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, drawing on non-acted recordings of several thousand real users of commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...
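
    As a rough sketch of the kind of statistical emotion classifier described (not the authors' models), the following trains a support vector machine on feature vectors standing in for per-utterance acoustic/linguistic measurements; the features and labels are synthetic stand-ins.

```python
# Synthetic stand-in for a statistical emotion classifier: an SVM over
# per-utterance feature vectors (e.g., pitch mean, energy, speaking rate).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # invented acoustic/linguistic features
y = rng.integers(0, 2, size=200)   # invented labels: 0 = neutral, 1 = angry

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
# With random labels this stays near chance; real features/labels would not.
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```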

  10. Prosodic Parallelism—Comparing Spoken and Written Language

    Science.gov (United States)

    Wiese, Richard

    2016-01-01

    The Prosodic Parallelism hypothesis claims adjacent prosodic categories to prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies feet contained in the same phonological phrase to display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  11. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims adjacent prosodic categories to prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies feet contained in the same phonological phrase to display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  12. Comparing word processing times in naming, lexical decision, and progressive demasking:Evidence from Chronolex

    Directory of Open Access Journals (Sweden)

    Ferrand, Ludovic

    2011-11-01

    We report performance measures for lexical decision, word naming, and progressive demasking for a large sample of monosyllabic, monomorphemic French words (N = 1,482). We compare the tasks and also examine the impact of word length, word frequency, initial phoneme, orthographic and phonological distance to neighbors, age of acquisition, and subjective frequency. Our results show that objective word frequency is by far the most important variable to predict reaction times in lexical decision. For word naming, it is the first phoneme. Progressive demasking was more influenced by a semantic variable (word imageability) than lexical decision, but was also affected to a much greater extent by perceptual variables (word length, first phoneme/letters). This may reduce its usefulness as a psycholinguistic word recognition task.

  13. Comparing word processing times in naming, lexical decision, and progressive demasking: evidence from chronolex.

    Science.gov (United States)

    Ferrand, Ludovic; Brysbaert, Marc; Keuleers, Emmanuel; New, Boris; Bonin, Patrick; Méot, Alain; Augustinova, Maria; Pallier, Christophe

    2011-01-01

    We report performance measures for lexical decision (LD), word naming (NMG), and progressive demasking (PDM) for a large sample of monosyllabic monomorphemic French words (N = 1,482). We compare the tasks and also examine the impact of word length, word frequency, initial phoneme, orthographic and phonological distance to neighbors, age-of-acquisition, and subjective frequency. Our results show that objective word frequency is by far the most important variable to predict reaction times in LD. For word naming, it is the first phoneme. PDM was more influenced by a semantic variable (word imageability) than LD, but was also affected to a much greater extent by perceptual variables (word length, first phoneme/letters). This may reduce its usefulness as a psycholinguistic word recognition task.
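
    The core analysis here is a multiple regression of reaction times on item-level predictors. The sketch below illustrates the procedure on simulated data; the coefficients and noise levels are invented, and only the item count matches the report.

```python
# Simulated illustration of regressing lexical decision RTs on predictors;
# slopes and noise are invented, only N matches the reported sample size.
import numpy as np

rng = np.random.default_rng(1)
n = 1482
log_freq = rng.normal(2.0, 1.0, n)             # log objective word frequency
length = rng.integers(3, 9, n).astype(float)   # word length in letters
rt = 700 - 40 * log_freq + 5 * length + rng.normal(0, 30, n)

X = np.column_stack([np.ones(n), log_freq, length])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
print("intercept, frequency slope, length slope:", beta)
# The strongly negative frequency slope mirrors the finding that objective
# word frequency dominates lexical decision times.
```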

  14. Semantic Access to Embedded Words? Electrophysiological and Behavioral Evidence from Spanish and English

    Science.gov (United States)

    Macizo, Pedro; Van Petten, Cyma; O'Rourke, Polly L.

    2012-01-01

    Many multisyllabic words contain shorter words that are not semantic units, like the CAP in HANDICAP and the DURA ("hard") in VERDURA ("vegetable"). The spaces between printed words identify word boundaries, but spurious identification of these embedded words is a potentially greater challenge for spoken language comprehension, a challenge that is…

  15. Adaptation to Pronunciation Variations in Indonesian Spoken Query-Based Information Retrieval

    Science.gov (United States)

    Lestari, Dessi Puji; Furui, Sadaoki

    Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
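
    Both retrieval models in the study share the tf-idf weighting schema. A minimal sketch of that weighting, over an invented toy corpus, might look like this:

```python
# Minimal tf-idf weighting over an invented toy corpus; this is the schema
# shared by the VSM and inference-network retrievers described above.
import math
from collections import Counter

docs = ["route to the airport", "train to the city station", "airport bus schedule"]
tokenized = [d.split() for d in docs]
N = len(tokenized)
df = Counter(term for doc in tokenized for term in set(doc))  # document frequency

def tfidf(doc_tokens):
    tf = Counter(doc_tokens)
    return {t: (tf[t] / len(doc_tokens)) * math.log(N / df[t]) for t in tf}

for doc in tokenized:
    print(tfidf(doc))
```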

  16. Towards a Framework for Teaching Spoken Grammar

    Science.gov (United States)

    Timmis, Ivor

    2005-01-01

    Since the advent of spoken corpora, descriptions of native speaker spoken grammar have become far more detailed and comprehensive. These insights, however, have been relatively slow to filter through to ELT practice. The aim of this article is to outline an approach to the teaching of native-speaker spoken grammar which is not only pedagogically…

  17. To Teach Spoken Grammar With Corpus Studies

    Institute of Scientific and Technical Information of China (English)

    CHENG Li

    2016-01-01

    The use of scripted materials in spoken language teaching has been challenged in recent years, and many scholars have accordingly proposed employing corpora in spoken language teaching. This article shows that combining scripted materials with authentic materials from corpora is an efficient way to teach spoken grammar.

  18. Factors Affecting Open-Set Word Recognition in Adults with Cochlear Implants

    OpenAIRE

    Holden, Laura K.; Finley, Charles C.; Firszt, Jill B.; Holden, Timothy A.; Brenner, Christine; Potts, Lisa G.; Gotter, Brenda D.; Vanderhoof, Sallie S.; Mispagel, Karen; Heydebrand, Gitry; Skinner, Margaret W.

    2013-01-01

    A monosyllabic word test was administered to 114 postlingually-deaf adult cochlear implant (CI) recipients at numerous intervals from two weeks to two years post-initial CI activation. Biographic/audiologic information, electrode position, and cognitive ability were examined to determine factors affecting CI outcomes. Results revealed that Duration of Severe-to-Profound Hearing Loss, Age at Implantation, CI Sound-field Threshold Levels, Percentage of Electrodes in Scala Vestibuli, Medio-later...

  19. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    Science.gov (United States)

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  1. The Perception of Assimilation in Newly Learned Novel Words

    Science.gov (United States)

    Snoeren, Natalie D.; Gaskell, M. Gareth; Di Betta, Anna Maria

    2009-01-01

    The present study investigated the mechanisms underlying perceptual compensation for assimilation in novel words. During training, participants learned canonical versions of novel spoken words (e.g., "decibot") presented in isolation. Following exposure to a second set of novel words the next day, participants carried out a phoneme…

  2. Estimating Performance of Pipelined Spoken Language Translation Systems

    CERN Document Server

    Rayner, M; Price, P; Lyberg, B; Rayner, Manny; Carter, David; Price, Patti; Lyberg, Bertil

    1994-01-01

    Most spoken language translation systems developed to date rely on a pipelined architecture, in which the main stages are speech recognition, linguistic analysis, transfer, generation and speech synthesis. When making projections of error rates for systems of this kind, it is natural to assume that the error rates for the individual components are independent, making the system accuracy the product of the component accuracies. The paper reports experiments carried out using the SRI-SICS-Telia Research Spoken Language Translator and a 1000-utterance sample of unseen data. The results suggest that the naive performance model leads to serious overestimates of system error rates, since there are in fact strong dependencies between the components. Predicting the system error rate on the independence assumption by simple multiplication resulted in a 16% proportional overestimate for all utterances, and a 19% overestimate when only utterances of length 1-10 words were considered.
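
    The arithmetic at issue: under independence, system accuracy is the product of component accuracies, so system error is overestimated whenever component errors cluster on the same utterances. The toy simulation below (invented numbers, not the paper's data) shows the effect:

```python
# Invented-numbers simulation of the paper's point: when recognition and
# translation tend to fail on the same (hard) utterances, multiplying the
# component accuracies overestimates the system error rate.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
difficulty = rng.random(n)  # shared per-utterance difficulty

rec_ok = difficulty + 0.05 * rng.normal(size=n) < 0.85    # ~85% accurate
trans_ok = difficulty + 0.05 * rng.normal(size=n) < 0.80  # ~80% accurate

naive_error = 1 - rec_ok.mean() * trans_ok.mean()   # independence assumption
actual_error = 1 - (rec_ok & trans_ok).mean()       # failures overlap
print(f"naive error {naive_error:.3f} vs actual error {actual_error:.3f}")
# Shared difficulty concentrates failures on the same utterances, so the
# actual error rate is well below the naive product-based estimate.
```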

  3. How reading acquisition changes children's spoken language network.

    Science.gov (United States)

    Monzalvo, Karla; Dehaene-Lambertz, Ghislaine

    2013-12-01

    To examine the influence of age and reading proficiency on the development of the spoken language network, we tested 6- and 9-year-old children listening to native and foreign sentences in a slow event-related fMRI paradigm. We observed a stable organization of the peri-sylvian areas during this time period, with a left dominance in the superior temporal sulcus and inferior frontal region. A year of reading instruction was nevertheless sufficient to increase activation in regions involved in phonological representations (posterior superior temporal region) and sentence integration (temporal pole and pars orbitalis). A top-down activation of the left inferior temporal cortex surrounding the visual word form area was also observed, but only in 9-year-olds (3 years of reading practice) listening to their native language. These results emphasize how a successful cultural practice, reading, slots into the biological constraints of the innate spoken language network.

  4. When two newly-acquired words are one: New words differing in stress alone are not automatically represented differently

    NARCIS (Netherlands)

    Sulpizio, S.; McQueen, J.M.

    2011-01-01

    Do listeners use lexical stress at an early stage in word learning? Artificial-lexicon studies have shown that listeners can learn new spoken words easily. These studies used non-words differing in consonants and/or vowels, but not differing only in stress. If listeners use stress information in

  5. Is there pain in champagne? Semantic involvement of words within words during sense-making

    NARCIS (Netherlands)

    van Alphen, P.M.; van Berkum, J.J.A.

    2010-01-01

    In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch

  6. Micro-controller based Remote Monitoring using Mobile through Spoken Commands

    Directory of Open Access Journals (Sweden)

    Naresh P Jawarkar

    2008-02-01

    Mobile phones can serve as a powerful tool for world-wide communication. A system was developed to remotely monitor processes through spoken commands using a mobile phone. Mel-cepstrum features are extracted from the spoken words, and a Learning Vector Quantization neural network is used to recognize the various words in a command; the recognition accuracy for spoken commands is about 98%. A text message is generated and sent to the control-system mobile as an SMS. On receipt of the SMS, the control-system mobile informs an AVR microcontroller-based card, which performs the specified task. The system alerts the user if any abnormal condition occurs, such as power failure or loss of control. Other applications to which this approach can be extended are also discussed.
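
    A sketch of the described front end follows, using the librosa library as a stand-in for the paper's own feature extraction; the file name and parameter choices are illustrative, and the LVQ classifier stage is omitted.

```python
# Sketch of the front end: mel-cepstral features from one recorded command.
# librosa is an assumption (the paper predates it); file name is illustrative.
import librosa

signal, sr = librosa.load("command_word.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # 13 coeffs per frame

# Averaging over frames gives one fixed-length vector per spoken word, a
# simple input representation for a vector-quantization classifier.
feature_vector = mfcc.mean(axis=1)
print(feature_vector.shape)  # (13,)
```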

  7. Word semantics is processed even without attentional effort.

    Science.gov (United States)

    Relander, Kristiina; Rämä, Pia; Kujala, Teija

    2009-08-01

    We examined the attentional modulation of semantic priming and the N400 effect for spoken words. The aim was to find out how the semantics of spoken language is processed when attention is directed to another modality (passive task), to the phonetics of spoken words (phonological task), or to the semantics of spoken words (word task). Equally strong behavioral priming effects were obtained in the phonological and the word tasks. A significant N400 effect was found in all tasks. The effect was stronger in the word and the phonological tasks than in the passive task, but there was no difference in the magnitude of the effect between the phonological and the word tasks. The latency of the N400 effect did not differ between the tasks. Although the N400 effect had a centroparietal maximum in the phonological and the word tasks, it was largest at the parietal recording sites in the passive task. The effect was more pronounced at the left than right recording sites in the phonological task, but there was no laterality effect in the other tasks. The N400 effect in the passive task indicates that semantic priming occurs even when spoken words are not actively attended. However, stronger N400 effect in the phonological and the word tasks than in the passive task suggests that controlled processes modulate the N400 effect. The finding that there were no differences in the N400 effect between the phonological and the word tasks indicates that the semantics of attended spoken words is processed regardless of whether semantic processing is relevant for task performance.

  8. Voice congruency facilitates word recognition.

    Directory of Open Access Journals (Sweden)

    Sandra Campeanu

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one, or two of these parameters were congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflecting context congruency between study and test words: the same-speaker condition produced the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same-speaker condition compared to the different-speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  9. Spoken Dialogue Interfaces: Integrating Usability

    Science.gov (United States)

    Spiliotopoulos, Dimitris; Stavropoulou, Pepi; Kouroupetroglou, Georgios

    Usability is a fundamental requirement for natural language interfaces. Usability evaluation reflects the impact of the interface and its acceptance by users. This work examines the potential of usability evaluation, in terms of issues and methodologies for spoken dialogue interfaces, along with the corresponding analysis of designer needs. It traces how usability can be integrated into the spoken language interface design lifecycle and provides a framework description for creating and testing usable content and applications for conversational interfaces. Main concerns include the identification of design issues for usability design and evaluation, the use of customer experience in the design of voice interfaces and dialogue, and the problems that arise from real-life deployment. The work also presents a real-life paradigm of a hands-on approach to applying usability methodologies in a spoken dialogue application environment, compared against a DTMF approach. Finally, the scope and interpretation of results from both the designer and the user standpoint of usability evaluation are discussed.

  10. Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure

    NARCIS (Netherlands)

    Orfanidou, E.; Adam, R.; Morgan, G.; McQueen, J.M.

    2010-01-01

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. A

  11. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    Science.gov (United States)

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  13. Social interaction facilitates word learning in preverbal infants: Word-object mapping and word segmentation.

    Science.gov (United States)

    Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo

    2017-08-01

    In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Disclosing spoken culture: user interfaces for access to spoken word archives

    NARCIS (Netherlands)

    Heeren, W.F.L.; de Jong, Franciska M.G.

    Over the past century alone, we have collected millions of hours of audiovisual data with great potential for e.g., new creative productions, research and educational purposes. The actual (re-)use of these collections, however, is severely hindered by their generally limited access. In this paper a

  15. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  16. The Self-Organization of a Spoken Word

    Directory of Open Access Journals (Sweden)

    Holden, John G.

    2012-07-01

    Pronunciation time probability density and hazard functions from large speeded word-naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics—interaction dominant dynamics. Lognormal and inverse power-law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power-law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald—alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions.
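
    The distributional comparison can be sketched as follows: fit candidate distributions by maximum likelihood and compare log-likelihoods. The data below are simulated, so this only illustrates the procedure, not the paper's mixture analysis.

```python
# Simulated illustration of the model comparison: fit lognormal and
# ex-Gaussian candidates by maximum likelihood, then compare log-likelihoods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
times = rng.lognormal(mean=6.3, sigma=0.25, size=2000)  # fake latencies (ms)

logn = stats.lognorm.fit(times, floc=0)   # (shape, loc, scale)
exg = stats.exponnorm.fit(times)          # (K, loc, scale)

ll_logn = stats.lognorm.logpdf(times, *logn).sum()
ll_exg = stats.exponnorm.logpdf(times, *exg).sum()
print(f"lognormal LL {ll_logn:.1f} vs ex-Gaussian LL {ll_exg:.1f}")
```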

  17. Workshop on the Transition from Speech Sounds to Spoken Words

    Science.gov (United States)

    1990-07-06

    [Abstract garbled in source.] Recoverable fragments indicate that three general processes are important to learning the phonological and phonetic properties of one's language, and list workshop contributions including Catherine Best (Wesleyan University and Haskins Laboratories) on transitions from phonetic universals to language-specific phonology, Anne Cutler on marked and unmarked segmentation strategies, and Peter Jusczyk.

  18. Lexical support for phonetic perception during nonnative spoken word recognition.

    Science.gov (United States)

    Samuel, Arthur G; Frost, Ram

    2015-12-01

    Second language comprehension is generally not as efficient and effective as native language comprehension. In the present study, we tested the hypothesis that lower-level processes such as lexical support for phonetic perception are a contributing factor to these differences. For native listeners, it has been shown that the perception of ambiguous acoustic–phonetic segments is driven by lexical factors (Samuel, Psychological Science, 12, 348-351, 2001). Here, we tested whether nonnative listeners can use lexical context in the same way. Native Hebrew speakers living in Israel were tested with American English stimuli. When subtle acoustic cues in the stimuli worked against the lexical context, these nonnative speakers showed no evidence of lexical guidance of phonetic perception. This result conflicts with the performance of native speakers, who demonstrate lexical effects on phonetic perception even with conflicting acoustic cues. When stimuli without any conflicting cues were used, the native Hebrew subjects produced results similar to those of native English speakers, showing lexical support for phonetic perception in their second language. In contrast, native Arabic speakers, who were less proficient in English than the native Hebrew speakers, showed no ability to use lexical activation to support phonetic perception, even without any conflicting cues. These results reinforce previous demonstrations of lexical support of phonetic perception and demonstrate how proficiency modulates the use of lexical information in driving phonetic perception.

  19. Language Differentiation Based on Sound Patterns of the Spoken Word

    Science.gov (United States)

    1976-03-01

    the Right Ear (Czermak): G, external auditory meatus; T, membrana tympani; P, tympanic cavity; o, fenestra ovalis; R, FENESTRA ROTUNDA; B, SEMICIRCULAR...as necessary. The Eustachian) tube (E in figure 2) insures that there is equal pressure on both- sides of the membrana tympani (eardrum) . The

  20. Cognitive aging and hearing acuity: modeling spoken language comprehension.

    Science.gov (United States)

    Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled.

  1. Locus of Word Frequency Effects in Spelling to Dictation: Still at the Orthographic Level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-01-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…

  2. Phonological and Semantic Knowledge Are Causal Influences on Learning to Read Words in Chinese

    Science.gov (United States)

    Zhou, Lulin; Duff, Fiona J.; Hulme, Charles

    2015-01-01

    We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…

  3. Use of orthography in spoken naming in aphasia: a case study.

    Science.gov (United States)

    Dean, Michael P

    2010-12-01

    An unusual pattern of responding by a woman with aphasia was analyzed with respect to cognitive neuropsychological models of language processing. Spontaneous spelling aloud in spoken naming tasks has been reported in a small number of earlier cases. C.P. exhibited this behavior and, in addition, produced attempts at assembled phonological naming that reflected errors in oral spelling. Assessment on a variety of single-word-processing tasks was carried out, together with an analysis of the variables underlying performance. The assessment revealed greater impairment to phonological than to orthographic output lexical representations, and a less errorful route to spoken responses by spelling aloud and by assembling responses from grapheme-to-phoneme conversion. C.P.'s skills changed over time, and when written naming ceased to hold an advantage over spoken naming, the use of orthographic information in spoken naming ceased. C.P.'s performance supports the existence of separate orthographic and phonological lexicons, argues against the phonological mediation of spelling, and, as orthography in spoken naming seemed to be used strategically, shows some limits on the interaction of components within models of single word processing.

  4. Words Get in the Way: Linguistic Effects on Talker Discrimination.

    Science.gov (United States)

    Narayan, Chandan R; Mak, Lorinda; Bialystok, Ellen

    2016-07-22

    A speech perception experiment provides evidence that the linguistic relationship between words affects the discrimination of their talkers. Listeners discriminated two talkers' voices with various linguistic relationships between their spoken words. Listeners were asked whether two words were spoken by the same person or not. Word pairs varied with respect to the linguistic relationship between the component words, forming either: phonological rhymes, lexical compounds, reversed compounds, or unrelated pairs. The degree of linguistic relationship between the words affected talker discrimination in a graded fashion, revealing biases listeners have regarding the nature of words and the talkers that speak them. These results indicate that listeners expect a talker's words to be linguistically related, and more generally, indexical processing is affected by linguistic information in a top-down fashion even when listeners are not told to attend to it.

  5. Interaction Hypothesis and Spoken English Teaching

    Institute of Scientific and Technical Information of China (English)

    郭菲菲

    2013-01-01

    Spoken English is one of the most practical skills that students need to acquire. However, there are many problems in spoken English teaching in China, one of the most serious being the lack of sufficient practice. According to the interaction hypothesis (Long, Gass), second language acquisition occurs when learners interact in conversation with native speakers and/or each other. Based on this hypothesis, the author presents some new insights for improving spoken English teaching and discusses their implications for the spoken English classroom.

  6. Some words on Word

    NARCIS (Netherlands)

    Janssen, Maarten; Visser, A.

    2008-01-01

    In many disciplines, the notion of a word is of central importance. For instance, morphology studies le mot comme tel, pris isolément ('the word as such, taken in isolation'; Mel'čuk, 1993 [74]). In the philosophy of language the word was often considered to be the primary bearer of meaning. Lexicography has as its fundamental role to c

  7. How to Improve Spoken English

    Institute of Scientific and Technical Information of China (English)

    郑瑜

    2015-01-01

    Undoubtedly, English has become more and more important in our daily lives, and how to communicate with others in English fluently has aroused general concern. Picking up a second language is never that easy. Ironically, many Chinese find it a big problem to speak English fluently although they can easily get high scores in written English exams. Spoken English is so important that this essay mainly introduces some effective methods to practice and improve it.

  9. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills.

  10. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    Science.gov (United States)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence, produced by a preceding step of isolated word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences among some regions of Brazil are considered, but only those that cause differences in phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.

  11. Comprehending spoken metaphoric reference: a real-time analysis.

    Science.gov (United States)

    Stewart, Mark T; Heredia, Roberto R

    2002-01-01

    Speakers and writers often use metaphor to describe someone or something in a referential fashion (e.g., The creampuff didn't show up for the fight to refer to a cowardly boxer). Research has demonstrated that readers do not comprehend metaphoric reference as easily as they do literal reference (Gibbs, 1990; Onishi & Murphy, 1993). In two experiments, we used a naming version of the cross-modal lexical priming (CMLP) paradigm to monitor the time-course of comprehending spoken metaphoric reference. In Experiment 1, listeners responded to visual probe words of either a figurative or literal nature that were presented at offset or 1000 ms after a critical prime word. Significant facilitatory priming was observed at prime offset to probes consistent with the metaphorical interpretation of the figuratively referring description, yet no priming was found for either probe type at the downstream location. In Experiment 2, we partially replicated Experiment 1 results at prime offset and found no priming at a probe point placed 1000 ms upstream from prime onset. Taken together, the data from these two experiments indicate that listeners are able to comprehend metaphoric reference faster than literal reference. Moreover, the effect appears to be strongest at prime offset, suggesting that activation of the nonliteral interpretation is closely tied to the relationship between the figuratively referring description and the intended referent. Implications for theories of metaphor comprehension, as well as for research in spoken metaphor, are discussed.

  12. How Do Raters Judge Spoken Vocabulary?

    Science.gov (United States)

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  13. Reader for Advanced Spoken Tamil. Final Report.

    Science.gov (United States)

    Schiffman, Harold

    This final report describes the development of a textbook for advanced, spoken Tamil. There is a marked difference between literary Tamil and spoken Tamil, and training in the former is not sufficient for speaking the language in everyday situations with reasonably educated native speakers. There is difficulty in finding suitable material that…

  14. Electrophysiological evidence for prelinguistic infants' word recognition in continuous speech

    NARCIS (Netherlands)

    Kooijman, V.M.; Hagoort, P.; Cutler, A.

    2005-01-01

    Children begin to talk at about age one. The vocabulary they need to do so must be built on perceptual evidence and, indeed, infants begin to recognize spoken words long before they talk. Most of the utterances infants hear, however, are continuous, without pauses between words, so constructing a vocabulary…

  15. A Descriptive Study of Registers Found in Spoken and Written Communication (A Semantic Analysis)

    Directory of Open Access Journals (Sweden)

    Nurul Hidayah

    2016-07-01

    This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research. The data of the study are registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and on the internet. The data can take the form of words, phrases and abbreviations. With regard to the method of data collection, the writer uses the library method as her instrument, relating it to the study of registers in spoken and written communication. The data are analyzed using a descriptive method. Registers are classified into formal and informal registers, and the meaning of each register is identified.

  16. Event-related potential study on semantic and phonological priming with spoken two-character Chinese words

    Institute of Scientific and Technical Information of China (English)

    吕勇; 杜英春; 宋娟; 沈德立

    2007-01-01

    …supports the view that word-initial sounds have special significance in the recognition process, although the theory requires some modification when applied to the recognition of disyllabic Chinese words. (4) The experiment found no evidence that semantic priming and phonological priming have different intracerebral sources. BACKGROUND: Researchers have done much work to investigate semantic priming with the event-related potentials (ERPs) method. The ERP component N400 is of great importance in this research domain. N400 is a negative wave that occurs at about 400 ms after stimulus onset. It has been accepted that N400 represents the processing of semantic information. In many studies, the amplitude of N400 could be reduced by semantic priming. By comparison, ERP studies on phonological priming, especially with auditory stimuli, deserve further investigation. OBJECTIVE: To investigate the ERP characteristics of semantic and phonological priming with spoken two-character Chinese words, and to test theories of auditory word recognition. DESIGN: Repeated measurement experiment. SETTING: Center for Psychology and Behavior Studies, Tianjin Normal University. PARTICIPANTS: This experiment was carried out between August and October 2003 in Tianjin Normal University. Seventeen healthy college students (8 male and 9 female, aged 19 to 23 years) with no hearing defect were involved in this experiment. All of them were native Chinese speakers, and all except one male participant were right-handed. Informed consent was obtained from all the participants. METHODS: A lexical decision task was used, which required participants to judge by pressing buttons whether the second word in each auditorily presented word pair was a real word or a pseudoword. The stimulus materials were 640 two-character word pairs covering semantically related, initial phonological overlap, final phonological overlap, phonologically and semantically unrelated, and word-pseudoword (control) conditions. These five kinds of word pairs were presented randomly in the experiment. The presentation…

  17. Immediate lexical integration of novel word forms

    Science.gov (United States)

    Kapnoula, Efthymia C.; McMurray, Bob

    2014-01-01

    It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Exp 1) or passive (Exp 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants’ fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation. PMID:25460382

  18. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.

  19. Signal Words

    Science.gov (United States)

    SIGNAL WORDS TOPIC FACT SHEET: NPIC fact sheets are designed to answer questions that are commonly asked by the … making decisions about pesticide use. What are Signal Words? Signal words are found on pesticide product labels, …

  20. Is "huh?" a universal word? Conversational infrastructure and the convergent evolution of linguistic items

    National Research Council Canada - National Science Library

    Dingemanse, Mark; Torreira, Francisco; Enfield, N J

    2013-01-01

    A word like Huh?--used as a repair initiator when, for example, one has not clearly heard what someone just said--is found in roughly the same form and function in spoken languages across the globe...

  1. Is "Huh?" a Universal Word? Conversational Infrastructure and the Convergent Evolution of Linguistic Items: e78273

    National Research Council Canada - National Science Library

    Mark Dingemanse; Francisco Torreira; N J Enfield

    2013-01-01

      A word like Huh?-used as a repair initiator when, for example, one has not clearly heard what someone just said- is found in roughly the same form and function in spoken languages across the globe...

  2. Literacy affects spoken language in a non-linguistic task: An ERP study

    Directory of Open Access Journals (Sweden)

    Laetitia ePerre

    2011-10-01

    It is now commonly accepted that orthographic information influences spoken word recognition in a variety of laboratory tasks (lexical decision, semantic categorization, gender decision). However, it remains a hotly debated issue whether orthography influences normal word perception in passive listening. That is, the argument has been made that orthography might only be activated in laboratory tasks that require lexical or semantic access in some form or another. It is possible that these rather unnatural tasks invite participants to use orthographic information in a strategic way to improve task performance. To put the strategy account to rest, we conducted an event-related brain potential (ERP) study in which participants were asked to detect a 500-ms-long noise burst that appeared on 25% of the trials (Go trials). In the NoGo trials, we presented spoken words that were orthographically consistent or inconsistent. Thus, lexical and/or semantic processing was not required in this task, and there was no strategic benefit in computing orthography to perform it. Nevertheless, despite the non-linguistic nature of the task, we replicated the consistency effect that has been previously reported in lexical decision and semantic tasks (i.e., inconsistent words produced more negative ERPs than consistent words as early as 300 ms after the onset of the spoken word). These results clearly suggest that orthography automatically influences word perception in normal listening even if there is no strategic benefit in doing so. The results are explained in terms of orthographic restructuring of phonological representations.

  3. Cantonese-Speaking Children Do Not Acquire Tone Perception before Tone Production-A Perceptual and Acoustic Study of Three-Year-Olds' Monosyllabic Tones.

    Science.gov (United States)

    Wong, Puisan; Fu, Wing M; Cheung, Eunice Y L

    2017-01-01

    Models of phonological development assume that speech perception precedes speech production and that children acquire suprasegmental features earlier than segmental features. Studies of Chinese-speaking children challenge these assumptions. For example, Chinese-speaking children can produce tones before two-and-a-half years but are not able to discriminate the same tones until after 6 years of age. This study compared the perception and production of monosyllabic Cantonese tones directly in 3-year-old children. Twenty children and their mothers identified Cantonese tones in a picture identification test and produced monosyllabic tones in a picture labeling task. To control for lexical biases on tone ratings, the mother and child productions were low-pass filtered to eliminate lexical information and were presented to five judges for tone classification. Detailed acoustic analysis was performed. Contrary to the view that children master lexical tones earlier than segmental phonemes, results showed that 3-year-old children could not perceive or produce any Cantonese tone with adult-like proficiency and incorrect tone productions were acoustically different from criterion. In contrast to previous findings that Cantonese-speaking children mastered tone production before tone perception, we observed more accuracy during speech perception than production. Findings from Cantonese-speaking children challenge some of the established tenets in theories of phonological development that have been tested mostly with native English speakers.

  4. (Almost) Word for Word: As Voice Recognition Programs Improve, Students Reap the Benefits

    Science.gov (United States)

    Smith, Mark

    2006-01-01

    Voice recognition software is hardly new--attempts at capturing spoken words and turning them into written text have been available to consumers for about two decades. But what was once an expensive and highly unreliable tool has made great strides in recent years, perhaps most recognized in programs such as Nuance's Dragon NaturallySpeaking…

  5. Enhancing spoken connected-digit recognition accuracy by error correction codes – A novel scheme

    Indian Academy of Sciences (India)

    Sunil K Kopparapu; P V S Rao

    2004-10-01

    Recognizing spoken connected-digit numbers accurately is an important problem with many applications. Though state-of-the-art word recognition systems have reached acceptable accuracy levels, the accuracy of current systems in recognizing connected spoken digits (and other short words) is very poor. In this paper, we develop a novel scheme to enhance the accuracy of recognizing a connected number. The basic idea proposed in this paper is to append extra digits to a number and use these appended digits to increase the overall accuracy of recognizing the number, as is done in the error-correcting code literature. We further show that the developed scheme is able to uniquely and exactly correct single-digit errors.
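
    The abstract gives the idea (append redundant digits so that a single misrecognized digit can be located and corrected) but not the construction. As a rough illustration, the following Python sketch uses a Hamming-style pair of mod-11 checksums; the encode/correct names and the mod-11 arithmetic are illustrative assumptions, not necessarily the authors' scheme.

      def encode(digits):
          # Append two mod-11 checksums: s1 captures the error magnitude,
          # s2 captures (position x magnitude). Payload length must be <= 10.
          s1 = sum(digits) % 11
          s2 = sum(i * d for i, d in enumerate(digits, start=1)) % 11
          return digits + [s1, s2]

      def correct(received):
          # Locate and fix at most one corrupted payload digit.
          *payload, c1, c2 = received
          s1 = (sum(payload) - c1) % 11                                        # = e
          s2 = (sum(i * d for i, d in enumerate(payload, start=1)) - c2) % 11  # = p*e
          if s1 == 0 or s2 == 0:
              return payload  # payload intact (at most a check digit is wrong)
          pos = (s2 * pow(s1, -1, 11)) % 11  # p = (p*e)/e, solvable since 11 is prime
          payload[pos - 1] = (payload[pos - 1] - s1) % 11
          return payload

      codeword = encode([4, 1, 5, 9, 2, 6])  # e.g. digits of a spoken number
      codeword[2] = 7                        # simulate one misrecognized digit
      assert correct(codeword) == [4, 1, 5, 9, 2, 6]

    Because 11 is prime, every nonzero error magnitude has a modular inverse, which is what lets the faulty position be solved for exactly from the two syndromes.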

  6. Predictors of Spoken Language Learning

    Science.gov (United States)

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We…

  7. SPOKEN CORPORA AND ANALYSIS OF NATURAL SPEECH

    Directory of Open Access Journals (Sweden)

    Shu-Chuan Tseng

    2008-12-01

    This paper introduces spoken corpora of Taiwan Mandarin created at Academia Sinica and gives an overview of some recent studies carried out using the spoken data. Spoken language resources of Taiwan Mandarin have been collected and processed at Academia Sinica since 2001. As a result, spoken data have been made available that are useful not only for language archiving purposes, but also for linguistic studies. In addition to the creation of the corpus, two lines of research are discussed in which theoretical and empirical studies are connected by using the aforementioned language resources: (1) language variation and change, and (2) spoken discourse analysis. Phonetic reduction is one of the main reasons for changes within a language, and it is important to take into account different levels of variation in spontaneous speech. For this purpose, we studied syllable contraction/merger, vowel reduction, and phonetic reduction in directional complements. Discourse items also play an essential part, because they add specific implications to sentences and their use is mainly marked by prosodic means. We segmented spoken discourse into smaller prosodic units to allow for a more precise study of discourse items, prosodic features, and disfluency. These issues are correlated with each other, especially through prosodic markings.

  8. Cognitive aging and hearing acuity: Modeling spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Arthur eWingfield

    2015-06-01

    The comprehension of spoken language has been characterized by a number of local theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we examine aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout our discussion our goal is to offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled.

  9. Predictors of spoken language learning

    OpenAIRE

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We found those who were successful to have higher activation in bilateral auditory cortex, larger volume in Heschl’s Gyrus, and more accurate pitch pa...

  10. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. document topic model (DTM) and word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario that user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving on retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.
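
    As background on how such topic models score documents for a query, one generic formulation ranks a document d by the likelihood it assigns to the query words, log P(q|d) = sum over w of log sum over k of P(w|k)P(k|d). The NumPy sketch below implements that scoring rule; the random matrices stand in for trained DTM/WTM parameters, and none of the names come from the paper itself.

      import numpy as np

      rng = np.random.default_rng(0)
      V, K, D = 1000, 8, 50  # vocabulary size, number of topics, number of documents
      p_w_k = rng.dirichlet(np.ones(V), size=K).T  # P(word | topic), shape (V, K)
      p_k_d = rng.dirichlet(np.ones(K), size=D).T  # P(topic | doc),  shape (K, D)

      def score(query_word_ids, doc_id):
          # log P(q|d) = sum_w log sum_k P(w|k) P(k|d)
          p_w_d = p_w_k @ p_k_d[:, doc_id]  # P(w|d) for every vocabulary word
          return float(np.sum(np.log(p_w_d[query_word_ids] + 1e-12)))

      ranked = sorted(range(D), key=lambda d: -score([3, 17, 256], d))
      print(ranked[:5])  # the five highest-scoring documents for the toy query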

  11. Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese.

    Science.gov (United States)

    Li, Xiaoqing; Hagoort, Peter; Yang, Yufang

    2008-05-01

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could either provide new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than for deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.

  12. Learning word meanings: overnight integration and study modality effects.

    Directory of Open Access Journals (Sweden)

    Frauke van der Ven

    According to the complementary learning systems (CLS) account of word learning, novel words are rapidly acquired (learning system 1), but slowly integrated into the mental lexicon (learning system 2). This two-step learning process has been shown to apply to novel word forms. In this study, we investigated whether novel word meanings are also gradually integrated after acquisition by measuring the extent to which newly learned words were able to prime semantically related words at two different time points. In addition, we investigated whether modality at study modulates this integration process. Sixty-four adult participants studied novel words together with written or spoken definitions. These words did not prime semantically related words directly following study, but did so after a 24-hour delay. This significant increase in the magnitude of the priming effect suggests that semantic integration occurs over time. Overall, words that were studied with a written definition showed larger priming effects, suggesting greater integration for the written study modality. Although the process of integration, reflected as an increase in the priming effect over time, did not significantly differ between study modalities, words studied with a written definition showed the most prominent positive effect after a 24-hour delay. Our data suggest that semantic integration requires time, and that studying in written format benefits semantic integration more than studying in spoken format. These findings are discussed in light of the CLS theory of word learning.

  13. Learning word meanings: overnight integration and study modality effects.

    Science.gov (United States)

    van der Ven, Frauke; Takashima, Atsuko; Segers, Eliane; Verhoeven, Ludo

    2015-01-01

    According to the complementary learning systems (CLS) account of word learning, novel words are rapidly acquired (learning system 1), but slowly integrated into the mental lexicon (learning system 2). This two-step learning process has been shown to apply to novel word forms. In this study, we investigated whether novel word meanings are also gradually integrated after acquisition by measuring the extent to which newly learned words were able to prime semantically related words at two different time points. In addition, we investigated whether modality at study modulates this integration process. Sixty-four adult participants studied novel words together with written or spoken definitions. These words did not prime semantically related words directly following study, but did so after a 24-hour delay. This significant increase in the magnitude of the priming effect suggests that semantic integration occurs over time. Overall, words that were studied with a written definition showed larger priming effects, suggesting greater integration for the written study modality. Although the process of integration, reflected as an increase in the priming effect over time, did not significantly differ between study modalities, words studied with a written definition showed the most prominent positive effect after a 24-hour delay. Our data suggest that semantic integration requires time, and that studying in written format benefits semantic integration more than studying in spoken format. These findings are discussed in light of the CLS theory of word learning.

  14. Coherence relations in academic spoken discourse

    Directory of Open Access Journals (Sweden)

    Juliano Desiderato Antonio

    2012-12-01

    According to Rhetorical Structure Theory, implicit propositions emerge from the combination of pieces of text that hang together. Implicit propositions have received various labels, such as coherence relations, discourse relations, rhetorical relations or relational propositions. When two portions of a text hold a relation, the addressee of the text may recognize the connection even without the presence of a formal sign such as a conjunction or a discourse marker. In this paper we claim that some intrinsic spoken discourse phenomena, like paraphrasing, repetition, correction and parenthetical insertion, hold coherence relations with other portions of discourse and thus may be considered strategies for the construction of coherence. The analysis, based on academic spoken discourse (five university lectures in Brazilian Portuguese), shows that these phenomena are recurring and relevant for the study of spoken discourse.

  15. Attentional capture by spoken language: effects on netballers' visual task performance.

    Science.gov (United States)

    Bishop, Daniel Tony; Moore, Sarah; Horne, Sara; Teszka, Robert

    2014-01-01

    In two experiments, participants performed visual detection, visual discrimination and decision-making tasks, in which a binary (left/right) response was required. In all experimental conditions, a spoken word ("left"/"right") was presented monaurally (left or right ear) at the onset of the visual stimulus. In Experiment 1, 26 non-athletes located a target amongst an array of distractors as quickly as possible, in both the presence and absence of spoken cues. Participants performed superiorly in the presence of valid cues, relative to invalid-cue and control conditions. In Experiment 2, 42 skilled netballers completed three tasks, in randomised order: a visual detection task, a visual discrimination task and a netball decision-making task - all in the presence of spoken cues. Our data showed that spoken auditory cues affected not only target detection, but also performance on more complex decision-making tasks: cues that were either spatially or semantically invalid slowed target detection time; spatially invalid cues impaired discrimination task accuracy; and cues that were either spatially or semantically valid improved accuracy and speeded decision-making time in the netball task. When studying visual perception and attention in sport, the impact of concomitant auditory information should be taken into account in order to achieve a more representative task design.

  16. WORD MAGIC

    Institute of Scientific and Technical Information of China (English)

    Zhao; Xinmin

    1999-01-01

    This article presents a word game named "Word Magic", which is effective and efficient in preventing word forgetting and decay as well as in helping students to improve their abilities in spelling, word building and so on. The procedures and rules of the game are formulated together with the making of the cards used in it. The advantages of the game are also expounded.

  17. Spoken Language Understanding Software for Language Learning

    Directory of Open Access Journals (Sweden)

    Hassan Alam

    2008-04-01

    In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech from the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that the software achieved an accuracy of around 70% in the law and order domain. For future work, we plan to develop similar systems for multiple languages.

  18. A statistical learning algorithm for word segmentation

    CERN Document Server

    Van Aken, Jerry R

    2011-01-01

    In natural speech, the speaker does not pause between words, yet a human listener somehow perceives this continuous stream of phonemes as a series of distinct words. The detection of boundaries between spoken words is an instance of a general capability of the human neocortex to remember and to recognize recurring sequences. This paper describes a computer algorithm that is designed to solve the problem of locating word boundaries in blocks of English text from which the spaces have been removed. This problem avoids the complexities of processing speech but requires similar capabilities for detecting recurring sequences. The algorithm that is described in this paper relies entirely on statistical relationships between letters in the input stream to infer the locations of word boundaries. The source code for a C++ version of this algorithm is presented in an appendix.
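
    The abstract names the goal but not the statistics it relies on. One simple letter-statistics segmenter, sketched below in Python, trains letter-bigram transition probabilities on text in which word boundaries are still visible and then posits a boundary wherever the observed transition is improbable. The '#' boundary marker, the 0.1 threshold and the function names are illustrative assumptions, not the paper's algorithm.

      from collections import Counter

      def train_bigrams(corpus_words):
          # Letter-to-letter transition probabilities, estimated from a
          # training corpus in which word boundaries are still marked ('#').
          text = "#" + "#".join(corpus_words) + "#"
          pairs = Counter(zip(text, text[1:]))
          firsts = Counter(text[:-1])
          return {pair: n / firsts[pair[0]] for pair, n in pairs.items()}

      def segment(stream, probs, threshold=0.1):
          # Insert a space wherever the observed letter transition is rarer
          # than `threshold`: a crude stand-in for a statistical dip detector.
          out = [stream[0]]
          for a, b in zip(stream, stream[1:]):
              if probs.get((a, b), 0.0) < threshold:
                  out.append(" ")
              out.append(b)
          return "".join(out)

      probs = train_bigrams("the cat sat on the mat the dog sat".split())
      print(segment("thecatsat", probs))  # -> "the cat sat"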

  19. Activation of words with phonological overlap

    Directory of Open Access Journals (Sweden)

    Claudia K. Friedrich

    2013-08-01

    Multiple lexical representations overlapping with the input (cohort neighbors) are temporarily activated in the listener’s mental lexicon when speech unfolds in time. Activation for cohort neighbors appears to decline rapidly as soon as there is mismatch with the input. However, it is a matter of debate whether or not they are completely excluded from further processing. We recorded behavioral data and event-related brain potentials (ERPs) in auditory-visual word onset priming during a lexical decision task. As primes we used the first two syllables of spoken German words. In a carrier word condition, the primes were extracted from spoken versions of the target words (ano- from ANORAK 'anorak'). In a cohort neighbor condition, the primes were taken from words that overlap with the target word up to the second nucleus (ana- taken from ANANAS 'pineapple'). Relative to a control condition, where primes and targets were unrelated, lexical decision responses for cohort neighbors were delayed. This reveals that cohort neighbors are disfavored by the decision processes at the behavioral front end. In contrast, left-anterior ERPs reflected long-lasting facilitated processing of cohort neighbors. We interpret these results as evidence for extended parallel processing of cohort neighbors. That is, in parallel to the preparation and elicitation of delayed lexical decision responses to cohort neighbors, aspects of the processing system appear to keep track of those less efficient candidates.

  20. Comments on Nigel Wiseman's A Practical Dictionary of Chinese Medicine (Ⅰ)--On the "Word-for-word" Literal Approach to Translation

    Institute of Scientific and Technical Information of China (English)

    XIE Zhu-fan; WHITE Paul

    2005-01-01

    Comments were made on the "word-for-word" literal translation method used by Mr. Nigel Wiseman in A Practical Dictionary of Chinese Medicine. He believes that only literal translation can reflect Chinese medical concepts accurately. The so-called "word-for-word" translation is actually "English-word-for-Chinese-character" translation. First, the authors of the dictionary made a list of single characters with English equivalents, and then they gave each character of the medical term an English equivalent according to the list. Finally, they made some minor modifications to make the rendering grammatically smoother. Many English terms thus produced are confusing. The defect of the word-for-word literal translation stems from the erroneous idea that a single character constitutes the basic element of meaning, corresponding to the notion of "word" in English, and that the meaning of a disyllabic or polysyllabic Chinese word is the simple addition of its constituent characters. Another big mistake is the negligence of the polysemy of Chinese characters. One or two English equivalents can by no means cover all the various meanings of a single character, which is a polysemous monosyllabic word. Various examples were cited from this dictionary to illustrate the mistakes.

  1. Learning Strategies in Chinese ESL Learners' Acquisition of Spoken English

    Institute of Scientific and Technical Information of China (English)

    安阳阳

    2007-01-01

    As for Chinese ESL (English as a second language) learners, one of the major problems in English learning is their poor performance in spoken English. Among the various factors that improve spoken English skills, it is believed that learning strategies play an important role in the acquisition of oral English. Beginning with the learning purposes and styles of spoken English, this paper discusses the application of socioaffective, cognitive and metacognitive learning strategies in Chinese ESL learners' acquisition of spoken English.

  2. When the Daffodat Flew to the Intergalactic Zoo: Off-Line Consolidation Is Critical for Word Learning from Stories

    Science.gov (United States)

    Henderson, Lisa; Devine, Katy; Weighall, Anna; Gaskell, Gareth

    2015-01-01

    Previous studies using direct forms of vocabulary instruction have shown that newly learned words are integrated with existing lexical knowledge only "after" off-line consolidation (as measured by competition between new and existing words during spoken word recognition). However, the bulk of vocabulary acquisition during childhood…

  4. Business Spoken English Learning Strategies for Chinese Enterprise Staff

    Institute of Scientific and Technical Information of China (English)

    Han Li

    2013-01-01

    This study addresses the issue of promoting effective business spoken English among enterprise staff in China. It aims to assess spoken English learning methods and to identify the difficulties enterprise staff face in oral English expression in the business area. It also provides strategies for enhancing enterprise staff's level of business spoken English.

  5. How to expand the corpus of spoken English

    Institute of Scientific and Technical Information of China (English)

    张光华

    2015-01-01

    With the speeding up of economic globalization, English's status as a global language is becoming more and more manifest. Starting from the concept of a spoken corpus and drawing on students' oral English learning methods, this article guides students to learn spoken English in different ways and to find effective ways to expand their spoken language corpora.

  7. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  8. Duration of content and function words in oral discourse by speakers with fluent aphasia: Preliminary data

    Directory of Open Access Journals (Sweden)

    Tan Lee

    2014-04-01

    Words that had occurred ten times or more in the speech materials were arbitrarily categorized as 'unique words' that could more reliably reflect syllable duration. There were a total of 206 unique words (141 content and 65 function words) in the aphasia speech materials and 253 unique words (187 content and 66 function words) in the normal materials; most of them were disyllabic or monosyllabic. A higher lexical diversity in the normal group, but a similar number of different function words in both groups, was consistent with earlier findings of impaired lexical access in aphasia. Table 1 displays the average duration per syllable and per word for content and function words in the two speaker groups. Our study showed that word duration in aphasic speech was longer than that in control speech. This is in line with our earlier result of a higher speaking rate in normal speech. While content words were longer than function words in the aphasic speech, the difference was not as significant as that in controls.

  9. Some Differences Between Written and Spoken English

    Institute of Scientific and Technical Information of China (English)

    张雁凌

    2001-01-01

    Some of the main differences that will be observed regarding written and spoken English concern how the origins of English relate to the formality of writing and the informality of speech. This will include how English is taught, which contributes to how it is later used in practice.

  10. Research on Spoken Interaction in Finland.

    Science.gov (United States)

    Hakulinen, Auli; Sorjonen, Marja-Leena

    1993-01-01

    Topics addressed in this review include ethnology and traditional dialect study, philology, linguistic conversation analysis, and interaction within the social sciences. Finland's size affects these research activities, and research on spoken interaction is shifting to group projects with a common focus. (Contains 68 references.) (JP)

  11. Towards Affordable Disclosure of Spoken Heritage Archives

    NARCIS (Netherlands)

    Ordelman, Roeland; Heeren, Willemijn; Huijbregts, Marijn; Jong, de Franciska; Hiemstra, Djoerd; Larson, M.; Fernie, K; Oomen, J

    2009-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to p

  12. Processing speaker affect during spoken sentence comprehension

    NARCIS (Netherlands)

    van Leeuwen, A.R.; Quené, H.; van Berkum, J.J.A.

    2013-01-01

    We often smile (and frown) while we talk. Speakers use facial expression, posture and prosody to provide additional cues that signal speaker stance. Speaker stance

  13. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    Science.gov (United States)

    LASTRA, YOLANDA; SOLA, DONALD F.

    UNITS 13-24 OF THE SPOKEN COCHABAMBA QUECHUA COURSE FOLLOW THE GENERAL FORMAT OF THE FIRST VOLUME (UNITS 1-12). THIS SECOND VOLUME IS INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE AND INCLUDES MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS, AS WELL AS GRAMMAR AND EXERCISE SECTIONS COVERING ADDITIONAL…

  14. Well Spoken: Teaching Speaking to All Students

    Science.gov (United States)

    Palmer, Erik

    2011-01-01

    All teachers at all grade levels in all subjects have speaking assignments for students, but many teachers believe they don't know how to teach speaking, and many even fear public speaking themselves. In his new book, "Well Spoken", veteran teacher and education consultant Erik Palmer shares the art of teaching speaking in any classroom. Teachers…

  15. Handbook for Spoken Mathematics: (Larry's Speakeasy).

    Science.gov (United States)

    Chang, Lawrence A.; And Others

    This handbook is directed toward those who have to deal with spoken mathematics, yet have insufficient background to know the correct verbal expression for the written symbolic one. It compiles consistent and well-defined ways of uttering mathematical expressions so listeners will receive clear, unambiguous, and well-pronounced representations.…

  19. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

    …banking spoken dialogue system. It comprises more than 700 transcribed dialogues by 310 users. A number of objective (performance) measures are derived from the corpus. The system’s learnability is analysed through the turn-taking strategies, and it is shown that users are capable of taking the initiative...

  20. How Does Word Length Evolve in Written Chinese?

    Science.gov (United States)

    Chen, Heng; Liang, Junying; Liu, Haitao

    2015-01-01

    We demonstrate substantial evidence that word length can be an essential lexical structural feature in word evolution in written Chinese. The data used in this study are diachronic Chinese short narrative texts with a time span of over 2,000 years. We show that the increase of word length is an essential regularity in word evolution. On the one hand, word frequency is found to depend on word length, and their relation follows the power-law function y = ax^(-b). On the other hand, our deeper analyses show that the increase of word length results in a simplification of characters, for balance, in written Chinese. Moreover, the correspondence between written and spoken Chinese is discussed. We conclude that the disyllabic trend may account for the increase of word length, and its impact can be explained by "the principle of least effort". PMID:26384237
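
    A power law y = ax^(-b) is linear in log-log coordinates, so a frequency-length relation of this kind can be fitted by ordinary least squares on logged counts. The sketch below shows that computation; the length/frequency numbers are invented for illustration and are not the paper's data.

      import numpy as np

      def fit_power_law(x, y):
          # Fit y = a * x**(-b) by linear regression on log(y) versus log(x).
          slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
          return np.exp(intercept), -slope  # a, b

      length = np.array([1.0, 2.0, 3.0, 4.0])            # word length in characters
      freq = np.array([12000.0, 3100.0, 1350.0, 800.0])  # hypothetical counts
      a, b = fit_power_law(length, freq)
      print(f"a = {a:.1f}, b = {b:.2f}")  # b > 0: frequency falls as length grows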

  1. Modality differences between written and spoken story retelling in healthy older adults

    Directory of Open Access Journals (Sweden)

    Jessica Ann Obermeyer

    2015-04-01

    Methods: Ten native English-speaking healthy elderly participants between the ages of 50 and 80 were recruited. Exclusionary criteria included neurological disease/injury, history of learning disability, uncorrected hearing or vision impairment, history of drug/alcohol abuse, and presence of cognitive decline (based on the Cognitive Linguistic Quick Test). Spoken and written discourse was analyzed for microlinguistic measures including total words, percent correct information units (CIUs; Nicholas & Brookshire, 1993) and percent complete utterances (CUs; Edmonds et al., 2009). CIUs measure relevant and informative words, while CUs focus at the sentence level and measure whether a relevant subject and verb and object (if appropriate) are present. Results: Analysis was completed using the Wilcoxon rank sum test due to the small sample size. Preliminary results revealed that healthy elderly people produced significantly more words in spoken retellings than in written retellings (p=.000); however, this measure contrasted with %CIUs and %CUs, with participants producing significantly higher %CIUs (p=.000) and %CUs (p=.000) in written story retellings than in spoken story retellings. Conclusion: These findings indicate that written retellings, while shorter, contained higher accuracy at both a word (CIU) and sentence (CU) level. This observation could be related to the ability to revise written text and therefore make it more concise, whereas the nature of speech results in more embellishment and "thinking out loud," such as comments about the task, associated observations about the story, etc. We plan to run more participants and conduct a main concepts analysis (before conference time) to gain more insight into modality differences and implications.
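
    For concreteness, both outcome measures are simple proportions over a scored transcript. The minimal sketch below assumes the CIU and complete-utterance counts have already been obtained by hand, following the cited scoring procedures.

      def percent_ciu(n_cius, n_words):
          # %CIU: share of words that are correct information units.
          return 100.0 * n_cius / n_words

      def percent_cu(n_complete, n_utterances):
          # %CU: share of utterances containing a relevant subject and verb
          # (and object, where appropriate).
          return 100.0 * n_complete / n_utterances

      print(percent_ciu(84, 120))  # hypothetical counts -> 70.0 %CIUs
      print(percent_cu(9, 14))     # hypothetical counts -> ~64.3 %CUs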

  2. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  3. The Relationship between Spoken Language and Interpretation and its teaching implications

    Institute of Scientific and Technical Information of China (English)

    唐小玲

    2009-01-01

    Along with the increasing progress of globalization and the advance of information technology, countries all over the world communicate with each other more and more. Against this background, there is a growing market demand for multi-level interpreters, and interpretation has become a buzzword. However, interpreting research in China started relatively late. Although many studies focus on the principles of interpreting or the qualities of interpreters, the relationship between spoken language and interpretation is seldom examined. This essay studies the relationship between spoken language and interpretation and provides some teaching implications, both of which are meaningful to the development of interpreting.

  4. AVIR: a spoken document retrieval system in e-learning environment

    Science.gov (United States)

    Gagliardi, Isabella; Padula, Marco; Pagliarulo, Patrizia; Aliprandi, Bruno

    2006-01-01

    In this paper we present AVIR (Audio & Video Information Retrieval), a project of CNR (Italian National Research Council) - ITC to develop tools to support an information system for distance e-learning. AVIR has been designed to store, index, and classify audio and video lessons to make them available to students and other interested users. The core of AVIR is an SDR (Spoken Document Retrieval) system which automatically transcribes the spoken documents into texts and indexes them through appropriately created dictionaries. During online use, the user can formulate queries searching documents by date, professor, or title of the lesson, or by selecting one or more specific words. The results are presented to the users: in the case of video lessons, a preview of the first frames is shown. Moreover, slides of the lessons and associated papers can be retrieved.

  5. Word order variation and foregrounding of complement clauses

    DEFF Research Database (Denmark)

    Christensen, Tanya Karoli; Jensen, Torben Juel

    2015-01-01

    Through mixed models analyses of complement clauses in a corpus of spoken Danish we examine the role of sentence adverbials in relation to a word order distinction in Scandinavian signalled by the relative position of sentence adverbials and finite verb (V>Adv vs. Adv>V). The type of sentence...

  6. Towards Understanding Spontaneous Speech Word Accuracy vs. Concept Accuracy

    CERN Document Server

    Boros, M; Gallwitz, F; Goerz, G; Hanrieder, G; Niemann, H

    1996-01-01

    In this paper we describe an approach to automatic evaluation of both the speech recognition and understanding capabilities of a spoken dialogue system for train time table information. We use word accuracy for recognition and concept accuracy for understanding performance judgement. Both measures are calculated by comparing these modules' output with a correct reference answer. We report evaluation results for a spontaneous speech corpus with about 10000 utterances. We observed a nearly linear relationship between word accuracy and concept accuracy.
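
    Word accuracy in this sense is the standard alignment-based measure: one minus the rate of substitutions, deletions and insertions in the best alignment of hypothesis and reference (concept accuracy is computed the same way over semantic concepts instead of words). Below is a minimal sketch of the word-level computation, not the authors' own tooling.

      def word_accuracy(hypothesis, reference):
          # Word accuracy = 1 - (S + D + I) / N, taken from the Levenshtein
          # alignment between hypothesis and reference word sequences.
          hyp, ref = hypothesis.split(), reference.split()
          # dp[i][j]: edit distance between ref[:i] and hyp[:j]
          dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
          for i in range(len(ref) + 1):
              dp[i][0] = i
          for j in range(len(hyp) + 1):
              dp[0][j] = j
          for i in range(1, len(ref) + 1):
              for j in range(1, len(hyp) + 1):
                  cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                  dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                                 dp[i][j - 1] + 1,         # insertion
                                 dp[i - 1][j - 1] + cost)  # substitution
          return 1.0 - dp[-1][-1] / len(ref)

      print(word_accuracy("the train leaves at nine",
                          "the train leaves at ten"))  # -> 0.8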

  7. Word classes

    DEFF Research Database (Denmark)

    Rijkhoff, Jan

    2007-01-01

    This article provides an overview of recent literature and research on word classes, focusing in particular on typological approaches to word classification. The cross-linguistic classification of word class systems (or parts-of-speech systems) presented in this article is based on statements found...... a parts-of-speech system that includes the categories Verb, Noun, Adjective and Adverb, other languages may use only a subset of these four lexical categories. Furthermore, quite a few languages have a major word class whose members cannot be classified in terms of the categories Verb – Noun – Adjective...

  8. Vocabulary plus Technology: An After-Reading Approach to Develop Deep Word Learning

    Science.gov (United States)

    Wolsey, Thomas DeVere; Smetana, Linda; Grisham, Dana L.

    2015-01-01

    Students who can use a term conversantly in academic environments know how to use it precisely in their writing and in their interactions with others; they can be said to deeply know, not just the word term in alphabetic or spoken forms, but the connections to ideas the term embodies. When students are intrigued by words and ideas, they want to…

  10. New Names for Known Things: On the Association of Novel Word Forms with Existing Semantic Information

    Science.gov (United States)

    Dobel, Christian; Junghofer, Markus; Breitenstein, Caterina; Klauke, Benedikt; Knecht, Stefan; Pantev, Christo; Zwitserlood, Pienie

    2010-01-01

    The plasticity of the adult memory network for integrating novel word forms (lexemes) was investigated with whole-head magnetoencephalography (MEG). We showed that spoken word forms of an (artificial) foreign language are integrated rapidly and successfully into existing lexical and conceptual memory networks. The new lexemes were learned in an…

  11. Speech Perception Engages a General Timer: Evidence from a Divided Attention Word Identification Task

    Science.gov (United States)

    Casini, Laurence; Burle, Boris; Nguyen, Noel

    2009-01-01

    Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…

  12. Lexical and Child-Related Factors in Word Variability and Accuracy in Infants

    Science.gov (United States)

    Macrae, Toby

    2013-01-01

    The present study investigated the effects of lexical age of acquisition (AoA), phonological complexity, age and expressive vocabulary on spoken word variability and accuracy in typically developing infants, aged 1;9-3;1. It was hypothesized that later-acquired words and those with more complex speech sounds would be produced more variably and…

  13. The locus of word frequency effects in skilled spelling-to-dictation.

    Science.gov (United States)

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  14. Thinking outside the box when reading aloud: Between (localist) module connection strength as a source of word frequency effects.

    Science.gov (United States)

    Besner, Derek; Risko, Evan F

    2016-10-01

    The frequency with which words appear in print is a powerful predictor of the time to read monosyllabic words aloud, and consequently all models of reading aloud provide an explanation for this effect. The entire class of localist accounts assumes that the effect of word frequency arises because the mental lexicon is organized around frequency of occurrence (the action is inside the lexical boxes). We propose instead that the frequency of occurrence effect is better understood in terms of the hypothesis that the strength of between-module connections varies as a function of word frequency. Findings from 3 different lines of investigation (experimental and computational) are difficult to understand in terms of the "within lexicon" account, but are consistent with the strength of between-module connections account.

  15. Action and Object Word Writing in a Case of Bilingual Aphasia

    Directory of Open Access Journals (Sweden)

    Maria Kambanaros

    2012-01-01

    We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e. difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact on both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.

  16. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    Science.gov (United States)

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  17. Phonological Analysis of University Students Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina Karjo

    2011-03-01

    The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  19. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing...... for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text...... window focused over the part which most likely contains an answer to the query. The two systems are integrated into a full spoken query answering system. The prototype can answer queries and questions within the chosen football (soccer) test domain, but the system has the flexibility for being ported...

  20. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence

    2014-01-01

    These proceedings present the state of the art in spoken dialog systems with applications in robotics, knowledge access and communication. They address specifically: 1. Dialog for interacting with smartphones; 2. Dialog for open-domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving speech translation); and 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop, "Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice".

  1. Towards Environment-Independent Spoken Language Systems

    Science.gov (United States)

    1990-01-01

    Towards Environment-Independent Spoken Language Systems. Alejandro Acero and Richard M. Stern, Department of Electrical and Computer Engineering... applications of spectral subtraction and spectral equalization for speech recognition systems include the work of Van Compernolle [5] and Stern and Acero [12]... Acero and Stern [1] proposed an approach to environment normalization in the cepstral domain, going beyond the noise stripping problem. In this paper we...

  2. Towards Affordable Disclosure of Spoken Heritage Archives

    OpenAIRE

    Ordelman, Roeland; Heeren, Willemijn; Huijbregts, Marijn; de Jong, Franciska; Hiemstra, Djoerd

    2009-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to provide search at different levels and a flexible way of presenting results. Strategies for automatic annotation based on speech recognition - supporting e.g., within-document search - are outlined ...

  3. Indexing spoken audio by LSA and SOMs

    OpenAIRE

    2000-01-01

    This paper presents an indexing system for spoken audio documents. The framework is indexing and retrieval of broadcast news. The proposed indexing system applies latent semantic analysis (LSA) and self-organizing maps (SOM) to map the documents into a semantic vector space and to display the semantic structures of the document collection. The SOM is also used to enhance the indexing of the documents that are difficult to decode. Relevant index terms and suitable index weights are computed by...
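
    A minimal sketch of the LSA indexing step, assuming scikit-learn is available; the SOM display stage is omitted and the documents are invented:

        # LSA sketch: TF-IDF + truncated SVD projects documents into a
        # low-dimensional semantic space. Toy documents are invented.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        docs = [
            "parliament passed the new budget today",
            "the striker scored twice in the cup final",
            "the finance minister defended the budget cuts",
        ]
        tfidf = TfidfVectorizer().fit_transform(docs)
        vecs = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

        # The two budget stories should end up closer to each other in the
        # latent space than either is to the sports story.
        print(cosine_similarity(vecs).round(2))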

  4. Some Thoughts on Teaching Business Spoken English

    Institute of Scientific and Technical Information of China (English)

    林雅

    2013-01-01

    The difficulty of teaching Business Spoken English lies in designing an applicable course plan in a structured and coherent manner, and in motivating and enabling students to acquire both business knowledge and language skills effectively. This essay discusses the need to know who the students are and emphasizes the importance of learning with interest through offering diversified learning materials and teaching equipment. Additionally, a proper evaluation plan should be developed to assess the students' overall performance and progress.

  5. Recognition memory for words and faces in the very old.

    Science.gov (United States)

    Diesfeldt, H; Vink, M

    1989-09-01

    The assessment of very elderly people is hindered by a scarcity of normative and reliability data for non-verbal memory tests. We tested the suitability of Warrington's Recognition Memory Test (RMT) for use with the elderly. The RMT consists of verbal (Recognition Memory for Words, RMW) and non-verbal (Recognition Memory for Faces, RMF) subtests. The facial recognition test was used in the standard format and a Dutch-language version of the word recognition test was developed using low frequency (10 or less/million) monosyllabic words. Eighty-nine subjects, varying in age from 69 to 93, were tested with the RMF. Means and SD are provided for three age groups (69-79, 80-84 and 85-93). Forty-five consecutive subjects were tested both with the RMW and the RMF. Recognition memory for words was better than recognition memory for faces in this sample. Moderate correlations (0.30-0.48) were found between RMT and WAIS Vocabulary and Raven's Coloured Progressive Matrices scores. Warrington's RMT was well tolerated, even by very elderly adults. The standardization data for the elderly over 70 add to the usefulness of this test of verbal and non-verbal episodic memory.

  6. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian eHerff

    2015-06-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.

  7. Brain-to-text: decoding spoken phrases from phone representations in the brain.

    Science.gov (United States)

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.

  8. [Words that won't fade off in the wind: identity and diagnosis in psychiatry].

    Science.gov (United States)

    Levín, Santiago A

    2013-01-01

    The main focus of this paper is to analyze the role of the word pronounced by members of the health staff, as it is constitutive of identity in the patient receiving the word. How is identity constructed? How does the word spoken by a significant Other impinge on this process? In particular, what is the influence of words denoting medical diagnoses? Regarding such queries we also look, on a preliminary basis, at each of the two main currents in Western medicine (biomedicine and medical anthropology), to find out if and how each addresses the relation between the word as an element of identity and the same word as a therapeutic tool.

  9. Symbolic gestures and spoken language are processed by a common neural system.

    Science.gov (United States)

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-08

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.

  10. Word Learning Deficits in Children With Dyslexia.

    Science.gov (United States)

    Alt, Mary; Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson

    2017-04-14

    The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets of games that manipulated phonological or visuospatial demands. All children were monolingual English-speaking 2nd graders without oral language impairment. The word learning games measured children's ability to link novel names with novel objects, to make decisions about the accuracy of those names and objects, to recognize the semantic features of the objects, and to produce the names of the novel words. Accuracy data were analyzed using analyses of covariance with nonverbal intelligence scores as a covariate. Word learning deficits were evident for children with dyslexia across every type of manipulation and on 3 of 5 tasks, but not for every combination of task/manipulation. Deficits were more common when task demands taxed phonology. Visuospatial manipulations led to both disadvantages and advantages for children with dyslexia. Children with dyslexia evidence spoken word learning deficits, but their performance is highly dependent on manipulations and task demand, suggesting a processing trade-off between visuospatial and phonological demands.

  11. Orthographic consistency and word-frequency effects in auditory word recognition: New evidence from lexical decision and rime detection

    Directory of Open Access Journals (Sweden)

    Ana ePetrova

    2011-10-01

    Many studies have repeatedly shown an orthographic consistency effect in the auditory lexical decision task. Words with phonological rimes that could be spelled in multiple ways (i.e., inconsistent words) typically produce longer auditory lexical decision latencies and more errors than do words with rimes that could be spelled in only one way (i.e., consistent words). These results have been extended to different languages and tasks, suggesting that the effect is quite general and robust. Despite this growing body of evidence, some psycholinguists believe that orthographic effects on spoken language are exclusively strategic, postlexical or restricted to peculiar (low-frequency) words. In the present study, we manipulated consistency and word frequency orthogonally in order to explore whether the orthographic consistency effect extends to high-frequency words. Two different tasks were used: lexical decision and rime detection. Both tasks produced reliable consistency effects for both low- and high-frequency words. Furthermore, in Experiment 1 (lexical decision), an interaction revealed a stronger consistency effect for low-frequency words than for high-frequency words, as initially predicted by Ziegler and Ferrand (1998), whereas no interaction was found in Experiment 2 (rime detection). Our results extend previous findings by showing that the orthographic consistency effect is obtained not only for low-frequency words but also for high-frequency words. Furthermore, these effects were also obtained in a rime detection task, which does not require the explicit processing of orthographic structure. Globally, our results suggest that literacy changes the way people process spoken words, even for frequent words.

  12. Rapid gains in segmenting fluent speech when words match the rhythmic unit: evidence from infants acquiring syllable-timed languages

    Directory of Open Access Journals (Sweden)

    Laura eBosch

    2013-03-01

    The ability to extract word-forms from sentential contexts represents an initial step in infants' process towards lexical acquisition. By age 6 months the ability is just emerging and evidence of it is restricted to certain testing conditions. Most research has been developed with infants acquiring stress-timed languages (English, but also German and Dutch) whose rhythmic unit is not the syllable. Data from infants acquiring syllable-timed languages are still scarce and limited to French (European and Canadian), partially revealing some discrepancies with English regarding the age at which word segmentation ability emerges. Research reported here aims at broadening this cross-linguistic perspective by presenting first data on the early ability to segment monosyllabic word-forms by infants acquiring Spanish and Catalan. Three different language groups (two monolingual and one bilingual) and two different age groups (8- and 6-month-old infants) were tested using natural language and a modified version of the HPP with familiarization to passages and testing on words. Results revealed positive evidence of word segmentation in all groups at both ages, but critically, the pattern of preference differed by age. A novelty preference was obtained in the older groups, while the expected familiarity preference was only found at the younger age tested, suggesting more advanced segmentation ability with an increase in age. These results offer first evidence of an early ability for monosyllabic word segmentation in infants acquiring syllable-timed languages such as Spanish or Catalan, not previously described in the literature. Data show no impact of bilingual exposure in the emergence of this ability and results suggest rapid gains in early segmentation for words that match the rhythm unit of the native language.

  13. Rapid gains in segmenting fluent speech when words match the rhythmic unit: evidence from infants acquiring syllable-timed languages.

    Science.gov (United States)

    Bosch, Laura; Figueras, Melània; Teixidó, Maria; Ramon-Casas, Marta

    2013-01-01

    The ability to extract word-forms from sentential contexts represents an initial step in infants' process toward lexical acquisition. By age 6 months the ability is just emerging and evidence of it is restricted to certain testing conditions. Most research has been developed with infants acquiring stress-timed languages (English, but also German and Dutch) whose rhythmic unit is not the syllable. Data from infants acquiring syllable-timed languages are still scarce and limited to French (European and Canadian), partially revealing some discrepancies with English regarding the age at which word segmentation ability emerges. Research reported here aims at broadening this cross-linguistic perspective by presenting first data on the early ability to segment monosyllabic word-forms by infants acquiring Spanish and Catalan. Three different language groups (two monolingual and one bilingual) and two different age groups (8- and 6-month-old infants) were tested using natural language and a modified version of the HPP with familiarization to passages and testing on words. Results revealed positive evidence of word segmentation in all groups at both ages, but critically, the pattern of preference differed by age. A novelty preference was obtained in the older groups, while the expected familiarity preference was only found at the younger age tested, suggesting more advanced segmentation ability with an increase in age. These results offer first evidence of an early ability for monosyllabic word segmentation in infants acquiring syllable-timed languages such as Spanish or Catalan, not previously described in the literature. Data show no impact of bilingual exposure in the emergence of this ability and results suggest rapid gains in early segmentation for words that match the rhythm unit of the native language.

  15. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6 to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative, but have greater difficulty in using grammatical devices that depend more on finer linguistic and pragmatic skills.

  16. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language.

    Science.gov (United States)

    Williams, Joshua T; Newman, Sharlene D

    2017-02-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of the targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

  17. fMRI congruous word repetition effects reflect memory variability in normal elderly

    OpenAIRE

    Olichney, John M.; Taylor, Jason R.; Hillert, Dieter G.; Chan, Shiao-hui; Salmon, David P.; Gatherwright, James; Iragui, Vicente J.; Kutas, Marta

    2008-01-01

    Neural circuits mediating the repetition effect for semantically congruous words in functional MRI were investigated in seventeen normal elderly participants (mean age = 70). Participants determined if written words were semantically congruent (50% probability) with spoken statements. Subsequent cued recall revealed robust explicit memory only for congruous items (83% versus 8% for incongruous). Event-related BOLD responses to New > Old congruous words were found in the left > right cingulate and fusiform gyr...

  18. Periodic words connected with the Fibonacci words

    Directory of Open Access Journals (Sweden)

    G. M. Barabash

    2016-06-01

    In this paper we introduce two families of periodic words (FLP-words of type 1 and FLP-words of type 2) that are connected with the Fibonacci words, and investigate their properties.

  19. Effects of Word Frequency and Transitional Probability on Word Reading Durations of Younger and Older Speakers.

    Science.gov (United States)

    Moers, Cornelia; Meyer, Antje; Janse, Esther

    2017-06-01

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups of Dutch speakers (younger children, 8-12 years; adolescents, 12-18 years; and older adults, 62-95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether the age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
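
    As a concrete illustration of the TP measure, forward transitional probability can be estimated from bigram counts; a small Python sketch over an invented corpus:

        # Forward TP(w2 | w1) = count(w1 w2) / count(w1), from a toy corpus.
        from collections import Counter

        corpus = "the cat sat on the mat and the cat slept".split()
        left_counts = Counter(corpus[:-1])          # tokens that have a right neighbour
        bigrams = Counter(zip(corpus, corpus[1:]))

        def forward_tp(w1, w2):
            return bigrams[(w1, w2)] / left_counts[w1]

        print(forward_tp("the", "cat"))  # 2 of the 3 "the" tokens precede "cat": 0.67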

  20. A Robust System for Natural Spoken Dialogue

    CERN Document Server

    Allen, J F; Ringger, E K; Sikorski, T; Allen, James F.; Miller, Bradford W.; Ringger, Eric K.; Sikorski, Teresa

    1996-01-01

    This paper describes a system that leads us to believe in the feasibility of constructing natural spoken dialogue systems in task-oriented domains. It specifically addresses the issue of robust interpretation of speech in the presence of recognition errors. Robustness is achieved by a combination of statistical error post-correction, syntactically- and semantically-driven robust parsing, and extensive use of the dialogue context. We present an evaluation of the system using time-to-completion and the quality of the final solution that suggests that most native speakers of English can use the system successfully with virtually no training.

  1. Building Ontologies to Understand Spoken Tunisian Dialect

    CERN Document Server

    Graja, Marwa; Belguith, Lamia Hadrich

    2011-01-01

    This paper presents a method to understand the spoken Tunisian dialect based on lexical semantics. The method takes into account the specificity of the Tunisian dialect, which has no linguistic processing tools. It is ontology-based, which allows exploiting the ontological concepts for semantic annotation and the ontological relations for speech interpretation. This combination increases the rate of comprehension and limits the dependence on linguistic resources. This paper also details the process of building the ontology used for annotation and interpretation of the Tunisian dialect in the context of speech understanding in dialogue systems for a restricted domain.

  2. The time-based word length effect and stimulus set specificity.

    Science.gov (United States)

    Neath, Ian; Bireta, Tamra J; Surprenant, Aimée M

    2003-06-01

    The word length effect is the finding that short items are remembered better than long items on immediate serial recall tests. The time-based word length effect refers to this finding when the lists comprise items that vary only in pronunciation time. Three experiments compared recall of three different sets of disyllabic words that differed systematically only in spoken duration. One set showed a word length effect, one set showed no effect of word length, and the third showed a reverse word length effect, with long words recalled better than short. A new fourth set of words was created, and it also failed to yield a time-based word length effect. Because all four experiments used the same methodology and varied only the stimulus sets, it is argued that the time-based word length effect is not robust and as such poses problems for models based on the phonological loop.

  3. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words.

    Science.gov (United States)

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H; Fitzgibbons, Peter J; Cohen, Julie I

    2015-02-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech.

  4. The impact of music on learning and consolidation of novel words.

    Science.gov (United States)

    Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J

    2017-01-01

    Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.

  5. Word prediction

    Energy Technology Data Exchange (ETDEWEB)

    Rumelhart, D.E.; Skokowski, P.G.; Martin, B.O.

    1995-05-01

    In this project we have developed a language model based on Artificial Neural Networks (ANNs) for use in conjunction with automatic textual search or speech recognition systems. The model can be trained on large corpora of text to produce probability estimates that would improve the ability of systems to identify words in a sentence given partial contextual information. The model uses a gradient-descent learning procedure to develop a metric of similarity among terms in a corpus, based on context. Using lexical categories based on this metric, a network can then be trained to do serial word probability estimation. Such a metric can also be used to improve the performance of topic-based search by allowing retrieval of information that is related to desired topics even if no obvious set of key words unites all the retrieved items.
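
    A drastically simplified Python/NumPy sketch of serial word probability estimation: a one-layer softmax model trained by gradient descent to predict the next word from the current one. The corpus, model size and learning rate are invented, and this is far smaller than the ANN language model the project describes:

        # One-layer softmax next-word predictor trained by gradient descent.
        import numpy as np

        corpus = "the dog chased the cat and the cat chased the mouse".split()
        vocab = sorted(set(corpus))
        idx = {w: i for i, w in enumerate(vocab)}
        V = len(vocab)

        X = np.array([idx[w] for w in corpus[:-1]])  # current words
        Y = np.array([idx[w] for w in corpus[1:]])   # next words

        W = np.zeros((V, V))  # W[current] holds the logits over the next word
        for _ in range(200):
            logits = W[X]
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            p[np.arange(len(X)), Y] -= 1             # gradient of cross-entropy
            for i, x in enumerate(X):
                W[x] -= 0.5 * p[i]

        probs = np.exp(W[idx["the"]])
        probs /= probs.sum()
        for w in vocab:                              # P(cat|the) converges to ~0.5
            print(f"P({w} | the) = {probs[idx[w]]:.2f}")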

  6. Learning words

    DEFF Research Database (Denmark)

    Jaswal, Vikram K.; Hansen, Mikkel

    2006-01-01

    Children tend to infer that when a speaker uses a new label, the label refers to an unlabeled object rather than one they already know the label for. Does this inference reflect a default assumption that words are mutually exclusive? Or does it instead reflect the result of a pragmatic reasoning...... process about what the speaker intended? In two studies, we distinguish between these possibilities. Preschoolers watched as a speaker pointed toward (Study 1) or looked at (Study 2) a familiar object while requesting the referent for a new word (e.g. 'Can you give me the blicket?'). In both studies......, despite the speaker's unambiguous behavioral cue indicating an intent to refer to a familiar object, children inferred that the novel label referred to an unfamiliar object. These results suggest that children expect words to be mutually exclusive even when a speaker provides some kinds of pragmatic...

  7. Spoken dialogue understanding and local context

    Science.gov (United States)

    Heeman, Peter A.

    1994-07-01

    Spoken dialogue poses many new problems to researchers in the field of computational linguistics. In particular, conversants must detect and correct speech repairs, segment a turn into individual utterances, and identify discourse markers. These problems are interrelated. For instance, there are some lexical items whose role in an utterance can be ambiguous: they can act as discourse markers, signal a speech repair, or even be part of the content of an utterance unit. So, these issues must be addressed together. The resolution of these problems will allow a basic understanding of how a speaker's turn can be broken down into individual contributions to the dialogue. We propose that this resolution must be, and can be, done using local context. It does not require a full understanding of the dialogue so far, nor, in most cases, a deep understanding of the current turn. Resolving these issues locally also means they can be resolved for the most part before later processing, and so will make a natural language understanding system more robust and able to deal with the unconstrained nature of spoken dialogue.

  8. Recurrent Word Combinations in EAP Test-Taker Writing: Differences between High- and Low-Proficiency Levels

    Science.gov (United States)

    Appel, Randy; Wood, David

    2016-01-01

    The correct use of frequently occurring word combinations represents an important part of language proficiency in spoken and written discourse. This study investigates the use of English-language recurrent word combinations in low-level and high-level L2 English academic essays sourced from the Canadian Academic English Language (CAEL) assessment.…

  9. Influence of Psychological Factors on the Improvement of Spoken English

    Institute of Scientific and Technical Information of China (English)

    董宁

    2013-01-01

    From the learner's innermost feelings, the author attempts to elaborate the influence of psychological factors on improving spoken language. The study of spoken English is a very complex process, which is easily affected by the learner's linguistic environment and character. We can draw the conclusion that psychological factors are an important problem and cannot be neglected.

  10. Extracting Information from Spoken User Input. A Machine Learning Approach

    NARCIS (Netherlands)

    Lendvai, P.K.

    2004-01-01

    We propose a module that performs automatic analysis of user input in spoken dialogue systems using machine learning algorithms. The input to the module is material received from the speech recogniser and the dialogue manager of the spoken dialogue system, the output is a four-level

  11. Spoken Language Research and ELT: Where Are We Now?

    Science.gov (United States)

    Timmis, Ivor

    2012-01-01

    This article examines the relationship between spoken language research and ELT practice over the last 20 years. The first part is retrospective. It seeks first to capture the general tenor of recent spoken research findings through illustrative examples. The article then considers the sociocultural issues that arose when the relevance of these…

  12. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  13. An Investigation into the Development of a Spoken English Test

    Institute of Scientific and Technical Information of China (English)

    付期棉

    2014-01-01

    This paper investigates the development of a spoken English test. The nature of speaking tests and their design principles are first reviewed. Then the procedure of test development is elaborated in detail, namely the design stage, construction stage and try-out stage. The challenges facing the development of spoken tests are finally discussed.

  15. Spoken language can have its impact on the respiratory passage.

    Science.gov (United States)

    D'Souza, Jyothi M P; D'Souza, Deepak Herald

    2010-07-01

    Spoken language, due to its chronic impact, could be looked upon as a factor in either the prevention or the causation of respiratory illnesses. There will be variations in articulatory aerodynamics and respiratory system dynamics among spoken languages. Geographic variation of disease patterns and the uncertain etiologies of some respiratory illnesses, which occur due to insult to the mucosal barrier or the defense mechanism of the respiratory passage, may be explained by the hypothesis of an unhealthy language. Habituation to a particular spoken language could mask the symptoms of phonotrauma. Other respiratory illnesses could originate from phonotrauma caused by spoken language. There exist lacunae in the research on languages. Finding the healthy language could mean relative freedom from respiratory illnesses. A healthy spoken language could relieve the stress on the vocal cords and improve the defense mechanism of the respiratory passage.

  16. Interaction Hypothesis in Second Language Acquisition and Spoken English Teaching

    Institute of Scientific and Technical Information of China (English)

    王佳佳

    2016-01-01

    Spoken English is one of the most practical skills that students need to acquire, and it is an important link in English teaching. However, there exist many problems in spoken English teaching in China, and one of the most serious is the lack of sufficient practice. According to the interaction hypothesis, second language acquisition occurs when learners interact in conversation with native speakers, so interaction also plays a crucial role in spoken English teaching. Based on the interaction hypothesis, this paper presents some new insights for improving spoken English teaching, discusses their application in the spoken English classroom, and gives a detailed explanation of the hypothesis's importance in SLA.

  17. Presentation video retrieval using automatically recovered slide and spoken text

    Science.gov (United States)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
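
    One simple way to act on the finding that slide text gives higher-precision retrieval than spoken text is a weighted late fusion of the two retrieval scores; the sketch below, including the weight, is an invented illustration rather than the authors' method:

        # Hypothetical late fusion of slide-text and spoken-text retrieval scores.
        def fused_score(slide_score, speech_score, w_slide=0.7):
            """Weight slide text more, reflecting its higher retrieval precision."""
            return w_slide * slide_score + (1 - w_slide) * speech_score

        videos = {"lec01": (0.82, 0.41), "lec02": (0.10, 0.55)}  # invented scores
        ranked = sorted(videos, key=lambda v: fused_score(*videos[v]), reverse=True)
        print(ranked)  # -> ['lec01', 'lec02']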

  18. On Language Anxiety and Countermeasures in University Spoken Language Teaching

    Institute of Scientific and Technical Information of China (English)

    王芳

    2013-01-01

    Language anxiety is the anxious or disconcerted feeling felt by language learners when they need to express themselves in a foreign or second language. Anxiety is a very important emotional barrier in language learning, and it has an important impact on language learning, especially on spoken language learning. The general lack of spoken language capacity among university students is an incontrovertible fact, and language anxiety is one of the important reasons for the imperfect effect of spoken language teaching. This article discusses the main reasons for language anxiety, including the barriers that self-esteem, risk-taking and competitiveness pose to university students' spoken language learning. It also discusses a series of countermeasures to ease language anxiety, so as to reduce anxiety effectively and promote the teaching of spoken English at university.

  19. SKOPE A connectionist/symbolic architecture of spoken Korean processing

    CERN Document Server

    Lee, G; Lee, Geunbae; Lee, Jong-Hyeok

    1995-01-01

    Spoken language processing requires speech and natural language integration. Moreover, spoken Korean calls for a unique processing methodology due to its linguistic characteristics. This paper presents SKOPE, a connectionist/symbolic spoken Korean processing engine, which emphasizes that: 1) connectionist and symbolic techniques must be selectively applied according to their relative strengths and weaknesses, and 2) the linguistic characteristics of Korean must be fully considered for phoneme recognition, speech and language integration, and morphological/syntactic processing. The design and implementation of SKOPE demonstrate how connectionist/symbolic hybrid architectures can be constructed for spoken agglutinative language processing. SKOPE also presents many novel ideas for speech and language processing. The phoneme recognition, morphological analysis, and syntactic analysis experiments show that SKOPE is a viable approach for spoken Korean processing.

  20. Respirator Speech Intelligibility Testing with an Experienced Speaker

    Science.gov (United States)

    2015-05-01

    intelligibility during mask wear using the Modified Rhyme Test (MRT) [2]. The MRT consists of 50 six-word lists of monosyllabic English words, most...having three sounds in a consonant-vowel-consonant sequence. The MRT requires listeners to correctly identify single-syllable words spoken by a...years) gave written informed consent prior to participation in the study. All volunteers were native speakers of American English and had normal...

  1. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation.

  2. Word wheels

    CERN Document Server

    Clark, Kathryn

    2013-01-01

    Targeting the specific problems learners have with language structure, these multi-sensory exercises appeal to all age groups, including adults. Exercises use sight, sound and touch and are also suitable for English as an Additional Language and Basic Skills students. Word Wheels includes off-the-shelf resources such as lesson plans and photocopiable worksheets, an interactive CD with practice exercises, and support material for the busy teacher or non-specialist staff, as well as homework activities.

  3. Probabilistic Aspects in Spoken Document Retrieval

    Directory of Open Access Journals (Sweden)

    Macherey Wolfgang

    2003-01-01

    Accessing information in multimedia databases encompasses a wide range of applications in which spoken document retrieval (SDR) plays an important role. In SDR, a set of automatically transcribed speech documents constitutes the collection to be searched, to which a user may address a request in natural language. This paper deals with two probabilistic aspects of SDR. The first part investigates the effect of recognition errors on retrieval performance and addresses the question of why recognition errors have only a small effect on it. The second part presents a new probabilistic approach to SDR that is based on interpolations between document representations. Experiments performed on the TREC-7 and TREC-8 SDR tasks show comparable or better results for the proposed method than for other advanced heuristic and probabilistic retrieval metrics.
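
    The abstract does not spell out the paper's interpolation model; as a hedged illustration of the family it belongs to, the sketch below scores documents with classic Jelinek-Mercer interpolation of a document language model with a collection model (the lambda value and toy documents are assumptions, not the authors' setup).

```python
import math
from collections import Counter

def score(query_terms, doc_terms, collection_counts, collection_len, lam=0.6):
    """Log P(query | doc) with Jelinek-Mercer smoothing:
    P(w|d) = lam * P_ml(w|d) + (1 - lam) * P(w|collection)."""
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    logp = 0.0
    for w in query_terms:
        p_doc = doc_counts[w] / doc_len if doc_len else 0.0
        p_col = collection_counts[w] / collection_len
        logp += math.log(lam * p_doc + (1 - lam) * p_col + 1e-12)
    return logp

# Toy "automatically transcribed" documents, recognition errors and all.
docs = [["the", "retrieval", "of", "spoken", "documents"],
        ["speech", "recognition", "errors", "in", "broadcast", "news"]]
all_terms = [w for d in docs for w in d]
col_counts, col_len = Counter(all_terms), len(all_terms)

query = ["spoken", "document", "retrieval"]
ranked = sorted(range(len(docs)),
                key=lambda i: score(query, docs[i], col_counts, col_len),
                reverse=True)
print("ranking:", ranked)
```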

  4. Parsing of Spoken Language under Time Constraints

    CERN Document Server

    Menzel, W

    1994-01-01

    Spoken language applications in natural dialogue settings place serious requirements on the choice of processing architecture. Especially under adverse phonetic and acoustic conditions, parsing procedures have to be developed which not only analyse the incoming speech in a time-synchronous and incremental manner, but which are able to schedule their resources according to the varying conditions of the recognition process. Depending on the actual degree of local ambiguity, the parser has to select among the available constraints in order to narrow down the search space with as little effort as possible. A parsing approach based on constraint satisfaction techniques is discussed. It provides important characteristics of the desired real-time behaviour and attempts to mimic some of the attention-focussing capabilities of the human speech comprehension mechanism.

  5. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
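
    The abstract gives no network configuration; the sketch below shows the general idea of deep bottleneck features under assumed layer sizes: train a deep classifier with one narrow hidden layer, then discard the layers above it and keep the bottleneck activations as a compact frame-level representation (e.g., to feed an i-vector extractor).

```python
import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    """Deep classifier with a narrow bottleneck layer; all sizes are illustrative."""
    def __init__(self, n_in=39, n_hidden=1024, n_bottleneck=40, n_targets=3000):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_bottleneck),  # the bottleneck
        )
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(n_bottleneck, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_targets),  # e.g., tied phone states
        )

    def forward(self, x):
        return self.classifier(self.encoder(x))

    def extract_dbf(self, x):
        # After training, only the encoder is kept: its output is the DBF.
        with torch.no_grad():
            return self.encoder(x)

net = BottleneckNet()
frames = torch.randn(100, 39)   # 100 acoustic frames of assumed 39-dim features
dbf = net.extract_dbf(frames)   # (100, 40) bottleneck features
print(dbf.shape)
```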

  6. Talk About Mouth Speculums: Collocational Competence and Spoken Fluency in Non-Native English-Speaking University Lecturers

    DEFF Research Database (Denmark)

    Westbrook, Pete

    Despite the large body of research into formulaic language and fluency, there seems to be a lack of empirical evidence for how collocations, often considered a subset of formulaic language, might impact on fluency. To address this problem, this dissertation examined to what extent correlations might exist between overall language proficiency, collocational competence and spoken fluency in non-native English-speaking university lecturers. The data came from 15 20-minute mini-lectures recorded between 2009 and 2011 for an English oral proficiency test for lecturers employed at the University of Copenhagen. The 15 lecturers came from three departments: Large Animal Science, Information Technology and Mathematics. Test examiners' global and fluency scores from the test were analysed against collocational competence, measured as collocations produced per thousand words spoken, and three temporal...

  7. Hidden Markov model-based approach for generation of Pitman shorthand language symbols for consonants and vowels from spoken English

    Indian Academy of Sciences (India)

    G Hemantha Kumar; M Ravishankar; P Nagabushan; Basavaraj S Anami

    2006-06-01

    Pitman shorthand language (PSL) is a widely practised medium for transcribing/recording speech to text (StT) in English. This recording medium continues to exist in spite of considerable development in speech processing systems (SPS), because of its ability to record spoken/dictated text at high speeds of more than 120 words per minute. Hence, scope exists for exploiting this potential of PSL in present SPS. In this paper, an approach for feature extraction using Mel frequency cepstral coefficients (MFCC) and classification using hidden Markov models (HMM) for generating strokes comprising consonants and vowels (CV) in the process of production of Pitman shorthand language from spoken English is proposed. The proposed method is tested on a large number of samples, drawn from different speakers and the results are encouraging. The work is useful in total automation of PSL processing.
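
    A hedged sketch of the MFCC-plus-HMM pipeline the paper describes (the paper's own implementation details are not given; librosa and hmmlearn are our stand-ins): one Gaussian HMM is trained per CV stroke class, and an utterance is assigned to the class whose model gives it the highest likelihood.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_frames(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

def train_class_model(training_paths, n_states=5):
    feats = [mfcc_frames(p) for p in training_paths]
    X = np.vstack(feats)
    lengths = [len(f) for f in feats]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(path, models):
    x = mfcc_frames(path)
    return max(models, key=lambda label: models[label].score(x))

# Usage (hypothetical stroke labels and file lists):
# models = {"KA": train_class_model([...]), "KI": train_class_model([...])}
# print(classify("utterance.wav", models))
```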

  8. Character-based Recognition of Simple Word Gesture

    Directory of Open Access Journals (Sweden)

    Paulus Insap Santosa

    2013-11-01

    People with normal senses use spoken language to communicate with others. This method cannot be used by those with hearing and speech impairments, and these groups will have difficulty when they try to communicate with each other using their own languages. Sign language is not easy to learn, as there are various sign languages, and not many tutors are available. This research focused on simple word gesture recognition based on the characters that form the word to be recognized. The method used for character recognition was the nearest neighbour method, which identified different fingers by the different markers attached to each finger. Simple word gesture recognition was tested by providing the series of characters that make up the intended word. The accuracy of word gesture recognition depended upon the accuracy of recognition of each character.
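
    A minimal nearest-neighbour sketch in the spirit of the paper (the feature vector below, per-finger marker coordinates, is an assumption; the paper does not specify its features):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: each row describes one hand shape by the
# 2-D positions of five fingertip markers (10 values), one row per sample.
rng = np.random.default_rng(0)
X_train = rng.random((200, 10))
y_train = rng.choice(list("abcde"), size=200)  # character labels

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A word is recognized character by character and then assembled.
word_frames = rng.random((3, 10))   # one frame per signed character
word = "".join(clf.predict(word_frames))
print(word)
```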

  9. Selective and invariant neural responses to spoken and written narratives.

    Science.gov (United States)

    Regev, Mor; Honey, Christopher J; Simony, Erez; Hasson, Uri

    2013-10-02

    Linguistic content can be conveyed both in speech and in writing. But how similar is the neural processing when the same real-life information is presented in spoken and written form? Using functional magnetic resonance imaging, we recorded neural responses from human subjects who either listened to a 7 min spoken narrative or read a time-locked presentation of its transcript. Next, within each brain area, we directly compared the response time courses elicited by the written and spoken narrative. Early visual areas responded selectively to the written version, and early auditory areas to the spoken version of the narrative. In addition, many higher-order parietal and frontal areas demonstrated strong selectivity, responding far more reliably to either the spoken or written form of the narrative. By contrast, the response time courses along the superior temporal gyrus and inferior frontal gyrus were remarkably similar for spoken and written narratives, indicating strong modality-invariance of linguistic processing in these circuits. These results suggest that our ability to extract the same information from spoken and written forms arises from a mixture of selective neural processes in early (perceptual) and high-order (control) areas, and modality-invariant responses in linguistic and extra-linguistic areas.

  11. When does word frequency influence written production?

    Science.gov (United States)

    Baus, Cristina; Strijkers, Kristof; Costa, Albert

    2013-01-01

    The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed to which letter were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner that words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  12. IMPLEMENTING SPOKEN ENGLISH IN THE EFL CLASSROOM IN CHINA

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Spoken English is generally neglected in English as a Foreign Language (EFL) classrooms in China. While it has now been placed on the agenda for the first time in the forthcoming new College English Teaching Syllabus, many teachers still wonder if it is feasible to conduct spoken English in the typical EFL classroom due to the problems of class size and inadequate teaching time. This article will argue that it is feasible to implement spoken English in the EFL classroom, even under the present circumstances, and makes some suggestions for doing so.

  13. Linguistic determinants of word colouring in grapheme-colour synaesthesia.

    Science.gov (United States)

    Simner, Julia; Glover, Louise; Mowat, Alice

    2006-02-01

    Previous studies of grapheme-colour synaesthesia have suggested that words tend to be coloured by their initial letter or initial vowel (e.g., Baron-Cohen et al., 1993; Ward et al., 2005). We examine this assumption in two ways. First, we show that letter position and syllable stress have been confounded, such that the initial letters of a word are often in stressed position (e.g., 'wo-man, 'ta-ble, 'ha-ppy). With participant JW, we separate these factors (e.g., with stress homographs such as 'con-vict vs. con-'vict) and show that the primary determinant of word colour is syllable stress, with only a secondary influence of letter position. We show that this effect derives from conceptual rather than perceptual stress, and that the effect is more prominent for synaesthetes whose words are coloured by vowels than by consonants. We examine, too, the time course of word colour generation. Slower colour naming occurs for spoken versus written stimuli, as we might expect from the additional requirement of grapheme conversion in the former. Reaction time data provide evidence, too, of incremental processing, since word colour is generated faster when the dominant grapheme is flagged early rather than late in the spoken word. Finally, we examine the role of non-dominant graphemes in word colouring and show faster colour naming when later graphemes match the dominant grapheme (e.g., ether) compared to when they do not (e.g., ethos). Taken together, our findings suggest that words are coloured incrementally by a process of competition between constituent graphemes, in which stressed graphemes and word-initial graphemes are disproportionately weighted.

  14. Automatic speech recognizer based on the Spanish spoken in Valdivia, Chile

    Science.gov (United States)

    Sanchez, Maria L.; Poblete, Victor H.; Sommerhoff, Jorge

    2004-05-01

    The performance of an automatic speech recognizer is affected by the training process (dependent on or independent of the speaker) and the size of the vocabulary. The language used in this study was the Spanish spoken in the city of Valdivia, Chile. A representative sample of 14 students and six professionals, all natives of Valdivia (ten women and ten men), took part in the study; they ranged in age from 20 to 30 years. Two systems were programmed based on the classical principles: digitizing, endpoint detection, linear prediction coding, cepstral coefficients, dynamic time warping, and a final decision stage preceded by a training step: (i) speaker-dependent (15 words: five colors and ten numbers), (ii) speaker-independent (30 words: ten verbs, ten nouns, and ten adjectives). A simple didactic application, with options to choose colors, numbers and drawings of the verbs, nouns and adjectives, was designed to be used with a personal computer. In both programs, the tests carried out showed a tendency towards errors in short monosyllabic words like "flor" and "sol." The best results were obtained in words with three syllables like "disparar" and "mojado." [Work supported by Proyecto DID UACh N S-200278.]
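
    Of the classical stages the authors list, dynamic time warping carries the template matching; a compact sketch (our own, not the authors' code) is:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    a: (n, d) and b: (m, d), e.g., frames of LPC-derived cepstral coefficients."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Recognition = nearest stored template under DTW distance (toy data).
rng = np.random.default_rng(0)
templates = {"flor": rng.random((40, 12)), "disparar": rng.random((90, 12))}
utterance = rng.random((85, 12))
print(min(templates, key=lambda w: dtw_distance(utterance, templates[w])))
```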

  15. Word Domain Disambiguation via Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.

    2006-06-04

    Word subject domains have been widely used to improve the performance of word sense disambiguation algorithms. However, comparatively little effort has been devoted so far to the disambiguation of word subject domains. The few existing approaches have focused on the development of algorithms specific to word domain disambiguation. In this paper we explore an alternative approach where word domain disambiguation is achieved via word sense disambiguation. Our study shows that this approach yields very strong results, suggesting that word domain disambiguation can be addressed in terms of word sense disambiguation with no need for special-purpose algorithms.

  16. Extracting Topic Words from the Web for Dialogue Sentence Generation

    OpenAIRE

    下川, 尚亮; Rafal, Rzepka; 荒木, 健治

    2009-01-01

    In this paper we extract topic words from Internet Relay Chat utterances. In such dialogues there are many more spoken-language expressions than in blogs or ordinary Web pages, and we presume that the constantly changing topic is difficult to determine from nouns alone, which are usually used for topic recognition. We propose a method for determining a conversation topic that also considers associated adjectives and verbs retrieved from the Web. Our first experiments show that extracting asso...

  17. Default spacing is the optimal spacing for word reading.

    Science.gov (United States)

    van den Boer, Madelon; Hakvoort, Britt E

    2015-01-01

    Increased interletter spacing is thought to reduce crowding effects and to enhance fluent reading. Several studies have shown beneficial effects of increased interletter spacing on reading speed and accuracy, especially in poor readers. Therefore, increased interletter spacing appears to be a relatively easy way to enhance reading performance. However, in adult readers reading speed was shown to be impeded with increased interletter spacing. Thus, findings on interletter spacing are still inconclusive. In the current study we examined the effect of a range of interletter spacings (-0.5, default, 0.5, 1, 1.5, 2) on naming fluency of monosyllabic and bisyllabic words in beginning (Grade 2) and more advanced (Grade 4) readers. Additionally we tested the effects of spacing in a subsample of poor readers. In contrast to previous findings, neither beginning nor advanced readers benefited from an increase in interletter spacing. However, they did show reduced reading fluency when letter spacing was smaller than the default spacing, which may be indicative of a crowding effect. Poor readers showed a similar pattern. We conclude that an increase in interletter spacing has no effect on word naming fluency.

  18. Compound nouns in spoken language production by speakers with aphasia compared to neurologically healthy speakers: an exploratory study.

    Science.gov (United States)

    Eiesland, Eli Anne; Lind, Marianne

    2012-03-01

    Compounds are words that are made up of at least two other words (lexemes), featuring lexical and syntactic characteristics and thus particularly interesting for the study of language processing. Most studies of compounds and language processing have been based on data from experimental single word production and comprehension tasks. To enhance the ecological validity of morphological processing research, data from other contexts, such as discourse production, need to be considered. This study investigates the production of nominal compounds in semi-spontaneous spoken texts by a group of speakers with fluent types of aphasia compared to a group of neurologically healthy speakers. The speakers with aphasia produce significantly fewer nominal compound types in their texts than the non-aphasic speakers, and the compounds they produce exhibit fewer different types of semantic relations than the compounds produced by the non-aphasic speakers. The results are discussed in relation to theories of language processing.

  19. Electronic Control System Of Home Appliances Using Speech Command Words

    Directory of Open Access Journals (Sweden)

    Aye Min Soe

    2015-06-01

    The main idea of this paper is to develop a speech recognition system by which smart home appliances are controlled with spoken words. The spoken words chosen for recognition are "Fan On", "Fan Off", "Light On", "Light Off", "TV On" and "TV Off". The input of the system takes speech signals to control home appliances. The proposed system has two main parts: speech recognition and the electronic control system for the smart home appliances. Speech recognition is implemented in the MATLAB environment and contains two main modules: feature extraction and feature matching. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction, and a Vector Quantization (VQ) approach using a clustering algorithm is applied for feature matching. In the electrical home appliance control system, an RF module is used to carry the command signal from the PC to the microcontroller wirelessly. The microcontroller is connected to a driver circuit for the relay and motor. The input commands are recognized very well, and the system performs well in controlling home appliances by spoken words.
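
    The abstract names MFCC features matched by vector quantization via a clustering algorithm; a hedged sketch of that scheme (k-means as the clustering step, with assumed codebook size) trains one codebook per command word and picks the word whose codebook quantizes the utterance with the least distortion.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def train_codebook(mfcc_frames, codebook_size=16):
    """mfcc_frames: (n_frames, n_mfcc) pooled over training utterances of one word."""
    return KMeans(n_clusters=codebook_size, n_init=10).fit(mfcc_frames).cluster_centers_

def distortion(mfcc_frames, codebook):
    # Mean distance from each frame to its nearest codeword.
    return cdist(mfcc_frames, codebook).min(axis=1).mean()

def recognize(mfcc_frames, codebooks):
    return min(codebooks, key=lambda w: distortion(mfcc_frames, codebooks[w]))

# Usage (hypothetical MFCC arrays):
# codebooks = {"fan on": train_codebook(...), "fan off": train_codebook(...)}
# print(recognize(utterance_mfcc, codebooks))
```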

  20. Exploratory analysis of real personal emergency response call conversations: considerations for personal emergency response spoken dialogue systems.

    Science.gov (United States)

    Young, Victoria; Rochon, Elizabeth; Mihailidis, Alex

    2016-11-14

    The purpose of this study was to derive data from real, recorded, personal emergency response call conversations to help improve the artificial intelligence and decision making capability of a spoken dialogue system in a smart personal emergency response system. The main study objectives were to: develop a model of personal emergency response; determine categories for the model's features; identify and calculate measures from call conversations (verbal ability, conversational structure, timing); and examine conversational patterns and relationships between measures and model features applicable for improving the system's ability to automatically identify call model categories and predict a target response. This study was exploratory and used mixed methods. Personal emergency response calls were pre-classified according to call model categories identified qualitatively from response call transcripts. The relationships between six verbal ability measures, three conversational structure measures, two timing measures and three independent factors: caller type, risk level, and speaker type, were examined statistically. Emergency medical response services were the preferred response for the majority of medium and high risk calls for both caller types. Older adult callers mainly requested non-emergency medical service responders during medium risk situations. By measuring the number of spoken words-per-minute and turn-length-in-words for the first spoken utterance of a call, older adult and care provider callers could be identified with moderate accuracy. Average call taker response time was calculated using the number-of-speaker-turns and time-in-seconds measures. Care providers and older adults used different conversational strategies when responding to call takers. The words 'ambulance' and 'paramedic' may hold different latent connotations for different callers. The data derived from the real personal emergency response recordings may help a spoken dialogue system
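
    Two of the measures named here can be pinned down concretely; a small sketch (our own reading of "words-per-minute" and "turn-length-in-words", with a made-up transcript format) is:

```python
def caller_measures(turns, call_duration_s):
    """turns: list of (speaker, utterance) tuples in call order."""
    caller_utts = [u for s, u in turns if s == "caller"]
    turn_length_in_words = len(caller_utts[0].split())  # first caller utterance
    words_per_minute = sum(len(u.split()) for u in caller_utts) / (call_duration_s / 60.0)
    return turn_length_in_words, words_per_minute

turns = [("caller", "Hello I need some help my husband has fallen"),
         ("call_taker", "Okay, is he conscious?"),
         ("caller", "Yes but he cannot get up")]
print(caller_measures(turns, call_duration_s=45))
```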

  1. Proactive spoken dialogue interaction in multi-party environments

    CERN Document Server

    Strauß, Petra-Maria

    2010-01-01

    This book describes spoken dialogue systems that act as independent dialogue partners in the conversation with and between users. It presents novel methods for dialogue history and dialogue management.

  2. When semantics aids phonology: A processing advantage for iconic word forms in aphasia.

    Science.gov (United States)

    Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella

    2015-09-01

    Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms.

  3. Listening to every other word: Examining the strength of linkage variables in forming streams of speech

    OpenAIRE

    Kidd, Gerald; Best, Virginia; Mason, Christine R.

    2008-01-01

    In a variation on a procedure originally developed by Broadbent [(1952). “Failures of attention in selective listening,” J. Exp. Psychol. 44, 428–433] listeners were presented with two sentences spoken in a sequential, interleaved-word format. Sentence one (target) comprised the odd-numbered words in the sequence and sentence two (masker) comprised the even-numbered words in the sequence. The task was to report the words in sentence one. The goal was to determine the effectiveness of cues lin...

  4. Korean: A Guide to the Spoken Language.

    Science.gov (United States)

    Department of Defense, Washington, DC.

    This language guide, written for United States Armed Forces personnel, serves as an introduction to the Korean language and presents important words and phrases for use in normal conversation. Linguistic expressions are classified under the following categories: (1) greetings and general phrases, (2) location, (3) directions, (4) numbers, (5)…

  5. Cognitive, Linguistic and Print-Related Predictors of Preschool Children's Word Spelling and Name Writing

    Science.gov (United States)

    Milburn, Trelani F.; Hipfner-Boucher, Kathleen; Weitzman, Elaine; Greenberg, Janice; Pelletier, Janette; Girolametto, Luigi

    2017-01-01

    Preschool children begin to represent spoken language in print long before receiving formal instruction in spelling and writing. The current study sought to identify the component skills that contribute to preschool children's ability to begin to spell words and write their name. Ninety-five preschool children (mean age = 57 months) completed a…

  7. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use over Time

    Science.gov (United States)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    2013-01-01

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson--Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semi-fixed multi-word units (MWUs), which comprise fixed parts with the potential…

  12. A cross-language comparison of the use of stress in word segmentation

    NARCIS (Netherlands)

    Tyler, M.D.; Perruchet, P.; Cutler, A.

    2006-01-01

    In spite of our illusions to the contrary, there are few acoustic cues to word boundaries in spoken language. While statistical probabilities between adjacent speech units provide language-general information for speech segmentation, this study shows that language-specific information may also play

  13. Tracking Eye Movements to Localize Stroop Interference in Naming: Word Planning versus Articulatory Buffering

    Science.gov (United States)

    Roelofs, Ardi

    2014-01-01

    Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the interference occurs in an articulatory buffer after word…

  14. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    Science.gov (United States)

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  16. Polysynthesis in Hueyapan Nahuatl: The Status of Noun Phrases, Basic Word Order, and Other Concerns

    DEFF Research Database (Denmark)

    Pharao Hansen, Magnus

    2010-01-01

    This article presents data showing that the syntax of the Nahuatl dialect spoken in Hueyapan, Morelos, Mexico has traits of nonconfigurationality: free word order and free pro-drop, with predicate-initial word order being pragmatically neutral. It permits discontinuous noun phrases and has... It is suggested that the differences observed between the two Nahuatl varieties may be a result of methodological problems in MacSwan's collection of data, skewing it in the direction of a more rigid syntax.

  17. Effects of disfluencies, predictability, and utterance position on word form variation in English conversation

    Science.gov (United States)

    Bell, Alan; Jurafsky, Daniel; Fosler-Lussier, Eric; Girand, Cynthia; Gregory, Michelle; Gildea, Daniel

    2003-02-01

    Function words, especially frequently occurring ones such as (the, that, and, and of), vary widely in pronunciation. Understanding this variation is essential both for cognitive modeling of lexical production and for computer speech recognition and synthesis. This study investigates which factors affect the forms of function words, especially whether they have a fuller pronunciation (e.g., [ði], [ðæt], [ænd], [ʌv]) or a more reduced or lenited pronunciation (e.g., [ðə], [ðɨt], [n], [ə]). It is based on over 8000 occurrences of the ten most frequent English function words in a 4-h sample from conversations from the Switchboard corpus. Ordinary linear and logistic regression models were used to examine variation in the length of the words, in the form of their vowel (basic, full, or reduced), and whether final obstruents were present or not. For all these measures, after controlling for segmental context, rate of speech, and other important factors, there are strong independent effects that make high-frequency monosyllabic function words more likely to be longer or have a fuller form (1) when neighboring disfluencies (such as filled pauses uh and um) indicate that the speaker was encountering problems in planning the utterance; (2) when the word is unexpected, i.e., less predictable in context; (3) when the word is either utterance initial or utterance final. Looking at the phenomenon in a different way, frequent function words are more likely to be shorter and to have less-full forms in fluent speech, in predictable positions or multiword collocations, and utterance internally. Also considered are other factors such as sex (women are more likely to use fuller forms, even after controlling for rate of speech, for example), and some of the differences among the ten function words in their response to the factors.
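
    A sketch of the kind of logistic regression reported here (the data are synthetic and the column names hypothetical; the authors' actual predictor coding is richer), modeling whether a function-word token surfaces in its full form:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
rate = rng.normal(5.0, 1.0, n)         # speech rate in syllables/second
logprob = rng.normal(-2.0, 1.0, n)     # contextual log-probability (predictability)
position = rng.choice(["initial", "medial", "final"], n)

# Fuller forms assumed more likely at slow rates, low predictability, utterance edges.
logit_p = (-0.8 * (rate - 5.0) - 0.6 * (logprob + 2.0)
           + np.where(position == "medial", -0.5, 0.5))
full = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame(dict(full_form=full, speech_rate=rate,
                       predictability=logprob, position=position))
model = smf.logit("full_form ~ speech_rate + predictability + C(position)",
                  data=df).fit()
print(model.params)
```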

  18. Psycholinguistic norms for action photographs in French and their relationships with spoken and written latencies.

    Science.gov (United States)

    Bonin, Patrick; Boyer, Bruno; Méot, Alain; Fayol, Michel; Droit, Sylvie

    2004-02-01

    A set of 142 photographs of actions (taken from Fiez & Tranel, 1997) was standardized in French on name agreement, image agreement, conceptual familiarity, visual complexity, imageability, age of acquisition, and duration of the depicted actions. Objective word frequency measures were provided for the infinitive modal forms of the verbs and for the cumulative frequency of the verbal forms associated with the photographs. Statistics on the variables collected for action items were provided and compared with the statistics on the same variables collected for object items. The relationships between these variables were analyzed, and certain comparisons between the current database and other similar published databases of pictures of actions are reported. Spoken and written naming latencies were also collected for the photographs of actions, and multiple regression analyses revealed that name agreement, image agreement, and age of acquisition are the major determinants of action naming speed. Finally, certain analyses were performed to compare object and action naming times. The norms and the spoken and written naming latencies corresponding to the pictures are available on the Internet (http://www.psy.univ-bpclermont.fr/~pbonin/pbonin-eng.html) and should be of great use to researchers interested in the processing of actions.

  19. Brain Basis of Phonological Awareness for Spoken Language in Children and Its Disruption in Dyslexia

    Science.gov (United States)

    Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.

    2012-01-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783

  20. Word, Words, Words: Ellul and the Mediocritization of Language

    Science.gov (United States)

    Foltz, Franz; Foltz, Frederick

    2012-01-01

    The authors explore how technique via propaganda has replaced the word with images creating a mass society and limiting the ability of people to act as individuals. They begin by looking at how words affect human society and how they have changed over time. They explore how technology has altered the meaning of words in order to create a more…

  1. MAWRID: A Model of Arabic Word Reading in Development.

    Science.gov (United States)

    Saiegh-Haddad, Elinor

    2017-07-01

    This article offers a model of Arabic word reading according to which three conspicuous features of the Arabic language and orthography shape the development of word reading in this language: (a) vowelization/vocalization, or the use of diacritical marks to represent short vowels and other features of articulation; (b) morphological structure, namely, the predominance and transparency of derivational morphological structure in the linguistic and orthographic representation of the Arabic word; and (c) diglossia, specifically, the lexical and lexico-phonological distance between the spoken and the standard forms of Arabic words. It is argued that the triangulation of these features governs the acquisition and deployment of reading mechanisms across development. Moreover, the difficulties that readers encounter in their journey from beginning to skilled reading may be better understood if evaluated within these language-specific features of Arabic language and orthography.

  2. Vowel-Plosive of English Word Recognition using HMM

    Directory of Open Access Journals (Sweden)

    Hemakumar G

    2011-11-01

    This paper discusses speech recognition of spoken English words formed by vowels, diphthongs and plosives; the system was developed and tested for a single speaker. The success rate for recognition of individually uttered words in experiments is excellent, reaching about 98.86%. The miss rate of about 1.14% was due almost entirely to false acceptance. In phoneme classification we reached 85% on average, with a misclassification rate of 15%. We successfully tested all the words formed by a vowel followed by a vowel-plosive, by plosive-vowels, or by a diphthong-plosive, and reached a high success rate in recognizing these words. All computations are performed in MATLAB and PRAAT software.

  3. Paroxysmal discharges triggered by hearing spoken language.

    Science.gov (United States)

    Tsuzuki, H; Kasuga, I

    1978-04-01

    We examined the modality of EEG activation by various kinds of acoustic stimulation in a middle-aged Japanese female with epilepsy. Paroxysmal discharges were triggered in the right frontal area (F4) by verbal stimulation. For the activation of the EEG, concentration of attention on the stimulation was essential; therefore paroxysmal discharges were triggered most easily by verbal stimuli when someone spoke to the patient directly. Stronger responses than usual were triggered by specific words, apparently reflecting the interest and concern of the patient. The latency from stimulation to paroxysmal discharges ranged from 230 to 1,300 msec, suggesting that the responses may have been a function of the perception and recognition of acoustic stimuli. "Heard-word epilepsy" or "Angesprochene Epilepsie" is suggested in this case.

  4. Newly learned word forms are abstract and integrated immediately after acquisition.

    Science.gov (United States)

    Kapnoula, Efthymia C; McMurray, Bob

    2016-04-01

    A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35-39, 2007; Gaskell & Dumay, Cognition, 89, 105-132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85-99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation.

  5. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
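
    For the graph-theoretic part, the degree and local efficiency of a node in a thresholded connectivity graph are standard quantities; a sketch with networkx (the threshold, random matrix, and ROI index are placeholders for a real ROI-by-ROI correlation matrix):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
corr = np.corrcoef(rng.normal(size=(30, 200)))  # stand-in for a 30-ROI correlation matrix
np.fill_diagonal(corr, 0.0)

# Binarize at an arbitrary threshold to obtain an unweighted graph.
G = nx.from_numpy_array((corr > 0.1).astype(int))

stg = 7  # hypothetical index of the left superior temporal gyrus ROI
degree = G.degree[stg]
# Local efficiency of a node = global efficiency of the subgraph of its neighbors.
local_eff = nx.global_efficiency(G.subgraph(list(G[stg])))
print(degree, local_eff)
```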

  6. Vowel duration affects visual word identification: evidence that the mediating phonology is phonetically informed.

    Science.gov (United States)

    Lukatela, Georgije; Eaton, Thomas; Sabadini, Laura; Turvey, M T

    2004-02-01

    What form is the lexical phonology that gives rise to phonological effects in visual lexical decision? The authors explored the hypothesis that beyond phonological contrasts the physical phonetic details of words are included. Three experiments using lexical decision and 1 using naming compared processing times for printed words (e.g., plead and pleat) that differ, when spoken, in vowel length and overall duration. Latencies were longer for long-vowel words than for short-vowel words in lexical decision but not in naming. Further, lexical decision on long-vowel words benefited more from identity priming than lexical decision on short-vowel words, suggesting that representations of long-vowel words achieve activation thresholds more slowly. The discussion focused on phonetically informed phonologies, particularly gestural phonology and its potential for understanding reading acquisition and performance.

  7. Word recognition using ideal word patterns

    Science.gov (United States)

    Zhao, Sheila X.; Srihari, Sargur N.

    1994-03-01

    The word shape analysis approach to text recognition is motivated by discoveries in psychological studies of the human reading process. It attempts to describe and compare the shape of the word as a whole object without trying to segment and recognize the individual characters, so it bypasses the errors committed in character segmentation and classification. However, the large number of classes and large variation and distortion expected in all patterns belonging to the same class make it difficult for conventional, accurate, pattern recognition approaches. A word shape analysis approach using ideal word patterns to overcome the difficulty and improve recognition performance is described in this paper. A special word pattern which characterizes a word class is extracted from different sample patterns of the word class and stored in memory. Recognition of a new word pattern is achieved by comparing it with the special pattern of each word class called ideal word pattern. The process of generating the ideal word pattern of each word class is proposed. The algorithm was tested on a set of machine printed gray scale word images which included a wide range of print types and qualities.
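
    The abstract leaves the construction of the "ideal word pattern" unspecified; as one hedged reading, the sketch below averages normalized training images of a word class into a single template and classifies a new word image by nearest template (real systems would first normalize image size and slant).

```python
import numpy as np

def to_template(images):
    """images: list of same-size grayscale word images (2-D arrays) of one class.
    The 'ideal pattern' here is simply their pixel-wise mean (an assumption)."""
    stack = np.stack([im / max(im.max(), 1e-9) for im in images])
    return stack.mean(axis=0)

def classify(image, templates):
    im = image / max(image.max(), 1e-9)
    return min(templates, key=lambda w: np.linalg.norm(im - templates[w]))

# Toy example with 20x60 "word images".
rng = np.random.default_rng(0)
templates = {w: to_template([rng.random((20, 60)) for _ in range(5)])
             for w in ("the", "and")}
print(classify(rng.random((20, 60)), templates))
```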

  8. On the Acquisition of Word Order in WH-Questions in theTromsø Dialect

    Directory of Open Access Journals (Sweden)

    Marit Richardsen Westergaard

    2004-01-01

    This article reports on a study of three children acquiring a dialect of Norwegian which allows two different word orders in certain types of WH-questions: verb second (V2) and verb third (V3). The latter is only allowed after monosyllabic WH-words, while the former, which is the result of verb movement, is the word order found in all other main clauses in the language. It is shown that both V2 and V3 are acquired extremely early by the children in the study (before the age of two), and that subtle distinctions between the two orders with respect to information structure are attested from the beginning. However, it is argued that V3 word order, which should be "simpler" than the V2 structure as it does not involve verb movement, is nevertheless acquired slightly later in its full syntactic form. This is taken as an indication that the V3 structure is syntactically more complex, and possibly also more marked.

  9. The intelligibility of words, sentences, and continuous discourse using the articulation index

    Science.gov (United States)

    Depaolis, R. A.

    1992-10-01

    The purpose of this research was to investigate the effect of message redundancy upon intelligibility. The original methodology for the Articulation Index (AI) [French and Steinberg, J. Acoust. Soc. Am. 19, 90-119 (1947)] was used to examine the relation between words, meaningful sentences, and continuous discourse (CD). One primary consideration was to derive the relations between the three speech types under tightly controlled, highly repeatable experimental conditions, such that any difference between them could be attributed solely to inherent contextual differences. One male speaker recorded 616 monosyllabic words, 176 meaningful speech perception in noise (SPIN) sentences, and 44 seventh-grade reading level CD passages. Twenty-four normal-hearing subjects made intelligibility estimates of the CD and sentences and identified words at each of 44 conditions of filtering and signal-to-noise ratio. The sentence and continuous discourse intelligibility scores plotted versus the AI (the transfer function) were within 0.05 AI of each other. The word recognition scores were considerably lower, at equivalent AI values, than those of both sentences and CD.
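
    A much-simplified sketch of an SNR-based Articulation Index computation (the band-importance weights and SNR values below are illustrative only; the original 20-band method has its own weights, but the 30-dB audible dynamic range is the classic assumption):

```python
import numpy as np

def articulation_index(band_snr_db, band_importance):
    """AI = sum of importance-weighted band audibilities, each band's
    audibility being its SNR clipped to [0, 30] dB and scaled by 1/30."""
    audibility = np.clip(np.asarray(band_snr_db, dtype=float), 0.0, 30.0) / 30.0
    w = np.asarray(band_importance, dtype=float)
    return float(np.sum(w / w.sum() * audibility))

snr = [25, 18, 12, 6, 0]       # hypothetical SNRs in five frequency bands
weights = [1, 2, 3, 2, 1]      # hypothetical band-importance weights
ai = articulation_index(snr, weights)
print(f"AI = {ai:.2f}")        # intelligibility is then read off a transfer function
```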

  10. The Influence of the Mode of Stimulus on Word Recognition of Children with Cochlear Implants

    Institute of Scientific and Technical Information of China (English)

    刘海红; 刘莎; 刘志成; 孔颖; 刘欣; 张杰; 葛文彤; 倪鑫

    2013-01-01

    Objective: To investigate the effects of the mode of stimulus (e.g. live voice or recorded voice) on the speech recognition of children with cochlear implants (CI). Methods: Thirty-nine subjects were divided randomly into a live voice stimulus group and a recorded voice stimulus group. The Standard Chinese version of the Lexical Neighborhood Test (LNT) was used in the evaluation. The subjects were instructed to repeat the sound they heard, and performance was scored as the percentage of words correctly identified. Results: (1) For the live voice group, the correct recognition rates of disyllabic easy, disyllabic hard, monosyllabic easy and monosyllabic hard words were 78.67%, 65.18%, 68.32% and 58.41%, respectively. Statistical analysis showed that the score for disyllabic word recognition was significantly higher than that for monosyllabic word recognition (P<0.001), and the score for easy word recognition was significantly higher than that for hard word recognition (P<0.001). (2) For the recorded voice group, the correct recognition rates of disyllabic easy, disyllabic hard, monosyllabic easy and monosyllabic hard words were 64.67%, 50.81%, 58.32% and 45.14%, respectively, indicating that performance was significantly better on disyllabic words than on monosyllabic words (P<0.001) and significantly better on easy words than on hard words (P<0.001). (3) The scores for disyllabic easy, disyllabic hard, monosyllabic easy and monosyllabic hard word recognition were significantly higher in the live voice group than in the recorded voice group (P<0.001), and individual differences were larger in the live voice group than in the recorded voice group. Conclusion: Recorded voice achieves more consistency, so it is the first choice for speech recognition tests in longitudinal follow-up and multicenter research programs.

  11. Does textual feedback hinder spoken interaction in natural language?

    Science.gov (United States)

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.

  13. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language.

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  14. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    Science.gov (United States)

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    To investigate the impact of a spoken language intervention curriculum aiming to improve the language environments that caregivers of low socioeconomic status (SES) provide for their D/HH children with CIs and HAs, in order to support children's spoken language development. Design: quasiexperimental. Setting: tertiary. Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK) participated in an intervention targeting children's early language environments. Outcomes were changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). There were significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group, and no significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  15. Learning new words from spontaneous speech: A project summary

    Science.gov (United States)

    Young, Sheryl R.

    1993-07-01

    This research develops methods that enable spoken language systems to detect and correct their own errors, automatically extending themselves to incorporate new words. The occurrence of unknown or out-of-vocabulary words is one of the major problems frustrating the use of automatic speech understanding systems in real world tasks. Novel words cause recognition errors and often result in recognition and understanding failures. Yet, they are common. Real system users speak in a spontaneous and relatively unconstrained fashion. They do not know what words the system can recognize and thereby are likely to exceed the system's coverage. Even if speakers constrained their speech, there would still be a need for self-extending systems as certain tasks inherently require dynamic vocabulary expansion (e.g. new company names, new flight destinations, etc.). Further, it is costly and labor intensive to collect enough training data to develop a representative vocabulary (lexicon) and language model for a spoken interface application. Unlike transcription tasks where it is often possible to find large amounts of on-line data from which a lexicon and language model can be developed, for many tasks this is not feasible. Developers of applications and database interfaces will probably not have the resources to gather a large corpus of examples to train a system to their specific task. Yet, most current speech and language model research is oriented toward training from large corpora. This research enables systems to be developed from small amounts of data and then 'bootstrapped'.

  16. Confucian Heritage and Spoken English Anxiety in China

    Institute of Scientific and Technical Information of China (English)

    LU Pan

    2014-01-01

    In order to promote learners' spoken English proficiency, this paper develops a further understanding of the impact of Confucian tradition, as a historically dominant ethical and philosophical system, on learner anxiety and ultimately on learning efficiency in the spoken English class, through a theoretical analysis from the perspectives of second language acquisition and intercultural communication. Confucian values such as self-restraint, the observance of propriety, reverence for teachers, and the face-keeping principle may increase learner anxiety in the oral English class, hindering the development of spoken English proficiency. Accordingly, a set of teaching strategies to help control culture-related anxiety in the spoken English class is systematically presented.

  17. Pupils' Knowledge and Spoken Literary Response beyond Polite Meaningless Words: Studying Yeats's "Easter, 1916"

    Science.gov (United States)

    Gordon, John

    2016-01-01

    This article presents research exploring the knowledge pupils bring to texts introduced to them for literary study, how they share knowledge through talk, and how it is elicited by the teacher in the course of an English lesson. It sets classroom discussion in a context where new examination requirements diminish the relevance of social, cultural…

  18. The power of the spoken word : Political mobilization and nation-building by Kuyper and Gladstone

    NARCIS (Netherlands)

    Hoekstra, H

    2003-01-01

    This article addresses the question why in the Netherlands it was the orthodox protestants who were able to mobilize the masses and not the political establishments of liberals and enlightened protestants during the latter part of the nineteenth century. The biblical rhetoric of their leader Abraham Kuyper…

  19. The Power of the Spoken Word in Defining Religion and Thought: A Case Study

    Directory of Open Access Journals (Sweden)

    Hilary Watt

    2009-01-01

    Full Text Available This essay explores the relationship between religion and language through a literature review of animist scholarship and, in particular, a case study of the animist worldview of Hmong immigrants to the United States. An analysis of the existing literature reveals how the Hmong worldview (which has remained remarkably intact despite widely dispersed settlements) both informs and is informed by the Hmong language. Hmong is contrasted with English with regard to both languages' respective affinities to the scientific worldview and Christianity. I conclude that Hmong and other "pre-scientific" languages have fundamental incompatibilities with the Western worldview (which both informs and is informed by dualistic linguistic conventions of modern language, a modern notion of scientific causality, and Judeo-Christian notions of the body/soul dichotomy). This incompatibility proves to be a major stumbling block for Western scholars of animist religion, who bring their own linguistic and cultural biases to their scholarship.

  20. Theories of Spoken Word Recognition Deficits in Aphasia: Evidence from Eye-Tracking and Computational Modeling

    Science.gov (United States)

    Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.

    2011-01-01

    We used eye-tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., "carrot-parrot") and cohort (e.g., "beaker-beetle") competitors. Broca's aphasic participants exhibited larger rhyme competition effects than age-matched controls. A re-analysis of previously reported data (Yee,…

  1. Influence of Eye Gaze on Spoken Word Processing: An ERP Study with Infants

    Science.gov (United States)

    Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D.

    2011-01-01

    Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n = 15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with…

  2. Slam and the Citizen Orator: Teaching Civic Oration and Engagement through Spoken Word

    Science.gov (United States)

    Wells, Celeste C.; DeLeon, Daniel

    2015-01-01

    The activity described in this paper was developed in response to the experience of teaching a large lecture introduction course to freshman and sophomore undergraduates called "The Rhetorical Tradition." This course covers, roughly, the last 2500 years of rhetoric. One of the issues faced in this course is that students struggle to…

  3. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception.

  4. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Acheson, D.J.; Takashima, A.

    2013-01-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes…

  7. Damage to Temporo-Parietal Cortex Decreases Incidental Activation of Thematic Relations during Spoken Word Comprehension

    Science.gov (United States)

    Mirman, Daniel; Graziano, Kristen M.

    2012-01-01

    Both taxonomic and thematic semantic relations have been studied extensively in behavioral studies and there is an emerging consensus that the anterior temporal lobe plays a particularly important role in the representation and processing of taxonomic relations, but the neural basis of thematic semantics is less clear. We used eye tracking to…

  8. Visual phonology: the effects of orthographic consistency on different auditory word recognition tasks.

    Science.gov (United States)

    Ziegler, Johannes C; Ferrand, Ludovic; Montant, Marie

    2004-07-01

    In this study, we investigated orthographic influences on spoken word recognition. The degree of spelling inconsistency was manipulated while rime phonology was held constant. Inconsistent words with subdominant spellings were processed more slowly than inconsistent words with dominant spellings. This graded consistency effect was obtained in three experiments. However, the effect was strongest in lexical decision, intermediate in rime detection, and weakest in auditory naming. We conclude that (1) orthographic consistency effects are not artifacts of phonological, phonetic, or phonotactic properties of the stimulus material; (2) orthographic effects can be found even when the error rate is extremely low, which rules out the possibility that they result from strategies used to reduce task difficulty; and (3) orthographic effects are not restricted to lexical decision. However, they are stronger in lexical decision than in other tasks. Overall, the study shows that learning about orthography alters the way we process spoken language.

  9. Acquisition and generalization of key word signing by three children with autism.

    Science.gov (United States)

    Tan, Xuet Ying; Trembath, David; Bloomberg, Karen; Iacono, Teresa; Caithness, Teena

    2014-04-01

    The aim of this study was to examine the effect of Key Word Sign (KWS) intervention on the acquisition and generalization of manual signing among three children with Autism Spectrum Disorder (ASD), and to measure any changes in their production of spoken words and gestures following intervention. A multiple baseline single-case experimental design was used to measure changes for each of the three children. All three children began using signs following the introduction of the KWS intervention, and generalized their use of some signs across activities. The introduction of the intervention was associated with either neutral, or statistically significantly positive, changes in the children's production of spoken words and natural gestures. The results provide preliminary evidence for the effectiveness of KWS for preschool children with ASD, which parents, therapists, and educators can use to inform clinical practice.

  10. Characteristics of Chinese Monosyllabic Onomatopoeia Based on the Internet Neologism "Duāng"

    Institute of Scientific and Technical Information of China (English)

    朱俊阳

    2015-01-01

    The rise of the Internet neologism "duāng" and of other online monosyllabic onomatopoeia poses a challenge to traditional Chinese monosyllabic onomatopoeia. To become established linguistic signs, such new forms must conform to the general phonetic, orthographic, and grammatical characteristics of monosyllabic onomatopoeia; these characteristics in turn provide the necessary basis for the formation of new monosyllabic onomatopoeia in the future.

  11. Identifying Discourse Markers in Spoken Dialog

    CERN Document Server

    Heeman, P A; Allen, J F; Heeman, Peter A.; Byron, Donna; Allen, James F.

    1998-01-01

    In this paper, we present a method for identifying discourse marker usage in spontaneous speech based on machine learning. Discourse markers are denoted by special POS tags, and thus the process of POS tagging can be used to identify discourse markers. By incorporating POS tagging into language modeling, discourse markers can be identified during speech recognition, in which the timeliness of the information can be used to help predict the following words. We contrast this approach with an alternative machine learning approach proposed by Litman (1996). This paper also argues that discourse markers can be used to help the hearer predict the role that the upcoming utterance plays in the dialog. Thus discourse markers should provide valuable evidence for automatic dialog act prediction.
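
    The paper's core idea can be caricatured in a few lines: give discourse markers a special tag, so that tagging and marker identification coincide. The sketch below is a toy illustration, not Heeman et al.'s statistical model; the marker lexicon, the single positional heuristic, and the example utterance are all invented for demonstration.

    ```python
    # Toy discourse-marker tagging: a special "DM" tag doubles as identification.
    # The lexicon and the positional heuristic are invented for illustration.
    DISCOURSE_MARKERS = {"well", "so", "now", "okay", "anyway", "like"}

    def tag_utterance(tokens):
        """Return (token, tag) pairs, tagging candidate discourse markers as DM."""
        tagged = []
        for i, tok in enumerate(tokens):
            # Utterance-initial position (or following another marker) is taken
            # here as evidence for the discourse-marker reading.
            if tok.lower() in DISCOURSE_MARKERS and (i == 0 or tagged[-1][1] == "DM"):
                tagged.append((tok, "DM"))
            else:
                tagged.append((tok, "WORD"))  # stand-in for ordinary POS tags
        return tagged

    print(tag_utterance("well so the engine takes the boxcar".split()))
    # [('well', 'DM'), ('so', 'DM'), ('the', 'WORD'), ('engine', 'WORD'), ...]
    ```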

  12. Jasper Johns' Painted Words.

    Science.gov (United States)

    Levinger, Esther

    1989-01-01

    States that the painted words in Jasper Johns' art act in two different capacities: concealed words partake in the artist's interrogation of visual perception; and visible painted words question classical representation. Argues that words are Johns' means of critiquing modernism. (RS)

  13. Aspects of Authentic Spoken German: Awareness and Recognition of Elision in the German Classroom

    Science.gov (United States)

    Lightfoot, Douglas

    2016-01-01

    This work discusses the importance of spoken German in classroom instruction. The paper examines the nature of natural spoken language as opposed to written language. We find a general consensus that the prevailing language measure (whether pertaining to written or spoken language) in instructional settings more often typifies the rules associated…

  14. Discourse Markers and Spoken English: Nonnative Use in the Turkish EFL Setting

    Science.gov (United States)

    Asik, Asuman; Cephe, Pasa Tevfik

    2013-01-01

    This study investigated the production of discourse markers by non-native speakers of English and their occurrences in their spoken English by comparing them with those used in native speakers' spoken discourse. Because discourse markers (DMs) are significant items in spoken discourse of native speakers, a study about the use of DMs by nonnative…

  15. An Analysis on Sources of Learners' Spoken English Errors

    Institute of Scientific and Technical Information of China (English)

    肖晓花

    2011-01-01

    Chinese learners often commit different oral errors in their English learning process. Having a clear knowledge of the sources of these errors will help learners learn English better. This paper makes a study of the sources of spoken English errors. It starts…

  16. Using Pictographs To Enhance Recall of Spoken Medical Instructions.

    Science.gov (United States)

    Houts, Peter S.; Bachrach, Rebecca; Witmer, Judith T.; Tringali, Carol A.; Bucher, Julia A.; Localio, Russell A.

    1998-01-01

    Tests the hypothesis that pictographs can improve recall of spoken medical instructions. Junior college subjects (N=21) listened to two lists of actions, one of which was accompanied by pictographs during both listening and recall while the other was not. Mean correct recall was 85% with pictographs and 14% without, indicating that pictographs can…

  17. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult…

  18. Associations among Play, Gesture and Early Spoken Language Acquisition

    Science.gov (United States)

    Hall, Suzanne; Rumney, Lisa; Holler, Judith; Kidd, Evan

    2013-01-01

    The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18-31 months. The children completed two tasks: (i) a structured measure of pretend (or "symbolic") play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture.…

  19. Animated and Static Concept Maps Enhance Learning from Spoken Narration

    Science.gov (United States)

    Adesope, Olusola O.; Nesbit, John C.

    2013-01-01

    An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…

  20. Lexicon Optimization for Dutch Speech Recognition in Spoken Document Retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland; Hessen, van Arjan; Jong, de Franciska

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage…
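
    Lexical coverage in this sense is straightforward to measure: the share of running words in held-out text that the recognition lexicon accounts for, with the out-of-vocabulary (OOV) rate as its complement. A minimal sketch, with placeholder file names:

    ```python
    # Minimal lexical-coverage / OOV-rate measurement; file names are placeholders.
    from collections import Counter

    def lexical_coverage(lexicon, tokens):
        """Fraction of running words covered by the lexicon (token coverage)."""
        counts = Counter(t.lower() for t in tokens)
        covered = sum(n for word, n in counts.items() if word in lexicon)
        return covered / sum(counts.values())

    lexicon = set(open("lexicon.txt", encoding="utf-8").read().split())
    tokens = open("heldout_transcripts.txt", encoding="utf-8").read().split()
    cov = lexical_coverage(lexicon, tokens)
    print(f"coverage: {cov:.1%}  OOV rate: {1 - cov:.1%}")
    ```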

  2. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication, dialogue…

  3. Processing spoken lectures in resource-scarce environments

    CSIR Research Space (South Africa)

    Van Heerden, CJ

    2011-11-01

    Full Text Available Initial work towards processing Afrikaans spoken lectures in a resource-scarce environment is presented. Two approaches to acoustic modeling for eventual alignment are compared: (a) using a well-trained target-language acoustic model and (b) using...

  4. Attitudes towards Literary Tamil and Standard Spoken Tamil in Singapore

    Science.gov (United States)

    Saravanan, Vanithamani

    2007-01-01

    This is the first empirical study that focused on attitudes towards two varieties of Tamil, Literary Tamil (LT) and Standard Spoken Tamil (SST), with the multilingual state of Singapore as the backdrop. The attitudes of 46 Singapore Tamil teachers towards speakers of LT and SST were investigated using the matched-guise approach along with…

  6. Reader for Advanced Spoken Tamil, Parts 1 and 2.

    Science.gov (United States)

    Schiffman, Harold F.

    Part 1 of this reader consists of transcriptions of five Tamil radio plays, with exercises, notes, and discussion. Part 2 is a synopsis grammar and a glossary. Both are intended for advanced students of Tamil who have had at least two years of instruction in the spoken language at the college level. The materials have been tested in classroom use…

  7. Porting a spoken language identification system to a new environment.

    CSIR Research Space (South Africa)

    Peche, M

    2008-11-01

    Full Text Available …the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use...

  8. Learning unification-based grammars using the Spoken English Corpus

    CERN Document Server

    Osborne, M; Osborne, Miles; Bridge, Derek

    1994-01-01

    This paper describes a grammar learning system that combines model-based and data-driven learning within a single framework. Our results from learning grammars using the Spoken English Corpus (SEC) suggest that combined model-based and data-driven learning can produce a more plausible grammar than is the case when using either learning style in isolation.

  9. Developing Deployable Spoken Language Translation Systems given Limited Resources

    OpenAIRE

    Eck, Matthias

    2008-01-01

    Approaches are presented that support the deployment of spoken language translation systems. Newly developed methods allow low cost portability to new language pairs. Proposed translation model pruning techniques achieve a high translation performance even in low memory situations. The named entity and specialty vocabulary coverage, particularly on small and mobile devices, is targeted to an individual user by translation model personalization.
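
    The memory/performance trade-off mentioned in the abstract can be pictured with the simplest form of translation-model pruning: keep only the k most probable target phrases per source phrase. The sketch below is in that spirit only; the thesis's actual techniques are more elaborate, and the toy phrase table is invented.

    ```python
    # Histogram-style pruning of a phrase table: keep the k best candidates
    # per source phrase. Toy data; real tables hold millions of entries.
    from collections import defaultdict

    def prune_phrase_table(entries, k=2):
        """entries: iterable of (source, target, probability) triples."""
        by_source = defaultdict(list)
        for source, target, prob in entries:
            by_source[source].append((prob, target))
        pruned = []
        for source, candidates in by_source.items():
            for prob, target in sorted(candidates, reverse=True)[:k]:
                pruned.append((source, target, prob))
        return pruned

    table = [("hello", "bonjour", 0.60), ("hello", "salut", 0.30),
             ("hello", "allo", 0.05), ("hello", "bonsoir", 0.05)]
    print(prune_phrase_table(table))  # memory shrinks; the best candidates survive
    ```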

  10. Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on languages spoken by English learners (ELs) are: (1) Twenty most common EL languages, as reported in states' top five lists: SY 2013-14; (2) States,…

  12. Flipper: An Information State Component for Spoken Dialogue Systems

    NARCIS (Netherlands)

    ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn

    This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML-templates to modify the information state and to select behaviours to perform.
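
    The Information State Update idea behind such rules can be sketched outside XML as well. Below is a rough Python analogue (not Flipper's actual template syntax; the state keys and the example rule are invented), in which each rule pairs a precondition on the information state with effects that modify the state and select a behaviour.

    ```python
    # A rough Python analogue of an Information State Update rule engine.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        precondition: Callable  # dict -> bool
        effects: Callable       # dict -> None (mutates the state)

    @dataclass
    class DialogueEngine:
        state: dict = field(default_factory=dict)
        rules: list = field(default_factory=list)

        def step(self):
            """Fire the first rule whose precondition holds; return its name."""
            for rule in self.rules:
                if rule.precondition(self.state):
                    rule.effects(self.state)
                    return rule.name
            return None

    engine = DialogueEngine(state={"user_said": "hello", "greeted": False})
    engine.rules.append(Rule(
        "greet-back",
        precondition=lambda s: s["user_said"] == "hello" and not s["greeted"],
        effects=lambda s: s.update(greeted=True, behaviour="say: Hello there!"),
    ))
    print(engine.step(), "->", engine.state["behaviour"])
    ```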

  13. Spoken Grammar and ELT Course Materials: A Missing Link?

    Science.gov (United States)

    Cullen, Richard; Kuo, I-Chun

    2007-01-01

    Drawing on the evidence of a growing body of corpus research over the past two decades, this article investigates the phenomenon of spoken grammar in conversational English and the extent to which our current knowledge of the area is reflected in contemporary textbooks for English as a foreign language (EFL) learners. The article reports on a…

  14. A spoken document retrieval application in the oral history domain

    NARCIS (Netherlands)

    Huijbregts, Marijn; Ordelman, Roeland; Jong, de Franciska

    2005-01-01

    The application of automatic speech recognition in the broadcast news domain is well studied. Recognition performance is generally high and accordingly, spoken document retrieval can successfully be applied in this domain, as demonstrated by a number of commercial systems. In other domains, a similar…

  15. Word-final stops in Brazilian Portuguese English: acquisition and pronunciation instruction

    Directory of Open Access Journals (Sweden)

    Walcir Cardoso

    2010-11-01

    Full Text Available This paper presents current research on the second language acquisition of English phonology and its implications for (and applications to) pronunciation instruction in the language classroom. More specifically, the paper follows the development of English word-final consonants by Brazilian Portuguese speakers learning English as a foreign language. The findings of two parallel studies reveal that the acquisition of these constituents is motivated by both extralinguistic (proficiency, style) and linguistic (word size, place of articulation) factors, and that the process is mediated by an intermediate stage characterized by consonant lengthening or aspiration (Onset-Nucleus sharing). Based on these results, I propose that the segments and environments that seem to delay coda production (i.e., monosyllabic words, labial and dorsal consonants) should be given priority in pronunciation instruction. Along the lines of Dickerson (1975), this paper proposes (what we believe is) a more effective and socially realistic pedagogy for the teaching of English pronunciation within an approach that recognizes that "variability is the norm rather than the exception" in second language acquisition.

  16. When is a word a word?

    Science.gov (United States)

    Vihman, M M; McCune, L

    1994-10-01

    Although adult-based words co-occur in the period of transition to speech with a variety of non-word vocalizations, little attention has been given to the formidable problem of identifying these earliest words. This paper specifies explicit, maximally 'inclusive' identification procedures, with criteria based on both phonetic and contextual parameters. A formal system for evaluating phonetic match is suggested, as well as a set of child-derived functional categories reflecting use in context. Analysis of word use across two samples of 10 children each, followed from 0;9 to 1;4, provides evidence to suggest that context-bound words can be 'trained' by focusing on eliciting language, but that the timing of context-flexible word use remains independent of such training.

  17. The Impact of Diglossia on Voweled and Unvoweled Word Reading in Arabic: A Developmental Study from Childhood to Adolescence

    Science.gov (United States)

    Saiegh-Haddad, Elinor; Schiff, Rachel

    2016-01-01

    All native speakers of Arabic read in a language variety that is remarkably distant from the one they use in everyday speech. The study tested the impact of this distance on reading accuracy and fluency by comparing reading of Standard Arabic (StA) words, used in StA only, versus Spoken Arabic (SpA) words, used in SpA too, among Arabic native…

  18. When Actions Speak Louder Than Words

    Directory of Open Access Journals (Sweden)

    Justine McGovern

    2016-07-01

    Full Text Available Through the lens of a study exploring dementia care partnering, the purpose of this methods article is to focus on the role of artifacts and embodied data in data collection. In addition, it illustrates how to use a range of data collecting methods. The article identifies benefits of additional data collecting methods to research and care. These include the need to expand data collecting methods beyond spoken word, integrate a range of data collecting approaches into research courses across disciplines, increase support of qualitative research, and advocate for greater inclusivity in research. Data collecting approaches can also have implications for quality of life among persons often excluded from research-building endeavors. They can contribute to the unfolding of new findings, which can influence care practices.

  19. A Few Words about Words | Poster

    Science.gov (United States)

    By Ken Michaels, Guest Writer In Shakespeare's play "Hamlet," Polonius inquires of the prince, "What do you read, my lord?" Not at all pleased with what he's reading, Hamlet replies, "Words, words, words."1 I have previously described the communication model in which a sender encodes a message and then sends it via some channel (or medium) to a receiver, who decodes the message and, ideally, understands what was sent. Surely the most common way of encoding a message is in choosing the most appropriate words for the listener or reader.

  1. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  2. Zipf's Law for Word Frequencies: Word Forms versus Lemmas in Long Texts.

    Science.gov (United States)

    Corral, Álvaro; Boleda, Gemma; Ferrer-i-Cancho, Ramon

    2015-01-01

    Zipf's law is a fundamental paradigm in the statistics of written and spoken natural language as well as in other communication systems. We raise the question of the elementary units for which Zipf's law should hold in the most natural way, studying its validity for plain word forms and for the corresponding lemma forms. We analyze several long literary texts comprising four languages, with different levels of morphological complexity. In all cases Zipf's law is fulfilled, in the sense that a power-law distribution of word or lemma frequencies is valid for several orders of magnitude. We investigate the extent to which the word-lemma transformation preserves two parameters of Zipf's law: the exponent and the low-frequency cut-off. We are not able to demonstrate a strict invariance of the tail, as for a few texts both exponents deviate significantly, but we conclude that the exponents are very similar, despite the remarkable transformation that going from words to lemmas represents, considerably affecting all ranges of frequencies. In contrast, the low-frequency cut-offs are less stable, tending to increase substantially after the transformation.
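
    For intuition, the rank-frequency form of the law can be eyeballed with a log-log regression. This is a crude sketch only: the authors fit the frequency distribution itself, handling the low-frequency cut-off, which is the statistically sounder estimator; the input file name here is a placeholder.

    ```python
    # Crude Zipf check: regress log(frequency) on log(rank) for word forms.
    import math
    import re
    from collections import Counter

    def zipf_exponent(text, top_n=1000):
        """Estimate alpha in freq ~ rank**(-alpha) by least squares on log-log."""
        words = re.findall(r"[a-z']+", text.lower())
        freqs = sorted(Counter(words).values(), reverse=True)[:top_n]
        xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
        ys = [math.log(f) for f in freqs]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return -slope  # alpha = -slope of the fitted line

    text = open("long_text.txt", encoding="utf-8").read()  # placeholder corpus
    print(f"estimated Zipf exponent: {zipf_exponent(text):.2f}")  # typically near 1
    ```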

  3. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  4. Word Intelligibility in Multi-voice Singing: The Influence of Chorus Size.

    Science.gov (United States)

    Condit-Schultz, Nathaniel; Huron, David

    2017-01-01

    This study investigated how the intelligibility of sung words is influenced by the number of singers in a choral music style. The study used a repeated-measures factorial design. One hundred forty-nine participants listened to recordings of spoken and sung English words and attempted to identify the words. Each stimulus word was sung or spoken in sync by either one, four, eight, sixteen, or twenty-seven members of a high-quality Soprano Alto Tenor Bass (SATB) choir. In general, single-voice word recognition was higher than multi-voice word recognition in the sung condition. However, the difference between four concurrent singers and the full choir was negligible; that is, reduced intelligibility with multiple singers shows little sensitivity to the number of singers. The principal effect of voice density on intelligibility is found to occur with coda consonants, a result consistent with the importance many choral conductors attribute to coordinating word offsets. In particular, the plosives /b/, /d/, /g/, and /p/ are easily confused. Coda liquids (/l/, /r/) were also found to be a source of confusion. Finally, an increasing density of voices appears to have a facilitating effect for the coda nasal /m/. Groups of four or more choral singers do appear to be less intelligible than single singers, although the observed effect is modest. However, increasing the number of singers in a choral texture beyond four singers does not appear to further degrade intelligibility. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  5. Word 2013 for dummies

    CERN Document Server

    Gookin, Dan

    2013-01-01

    This bestselling guide to Microsoft Word is the first and last word on Word 2013. It's a whole new Word, so jump right into this book and learn how to make the most of it. Bestselling For Dummies author Dan Gookin puts his usual fun and friendly candor back to work to show you how to navigate the new features of Word 2013. Completely in tune with the needs of the beginning user, Gookin explains how to use Word 2013 quickly and efficiently so that you can spend more time working on your projects and less time trying to figure it all out. Walks you through the capabilities…

  6. Professional WordPress

    CERN Document Server

    Stern, Hal; Williams, Brad

    2010-01-01

    An in-depth look at the internals of the WordPress system. As the most popular blogging and content management platform available today, WordPress is a powerful tool. This exciting book goes beyond the basics and delves into the heart of the WordPress system, offering overviews of the functional aspects of WordPress as well as plug-in and theme development. What is covered in this book? WordPress as a Content Management System; Hosting Options; Installing WordPress Files; Database Configuration; Dashboard Widgets; Customizing the Dashboard; Creating and Managing Content; Categorizing Your Content…

  7. Combinatorics on words Christoffel words and repetitions in words

    CERN Document Server

    Berstel, Jean; Reutenauer, Christophe; Saliola, Franco V

    2008-01-01

    The two parts of this text are based on two series of lectures delivered by Jean Berstel and Christophe Reutenauer in March 2007 at the Centre de Recherches Mathématiques, Montréal, Canada. Part I represents the first modern and comprehensive exposition of the theory of Christoffel words. Part II presents numerous combinatorial and algorithmic aspects of repetition-free words stemming from the work of Axel Thue, a pioneer in the theory of combinatorics on words. A beginner to the theory of combinatorics on words will be motivated by the numerous examples, and the large variety of exercises, which make the book unique at this level of exposition. The clean and streamlined exposition and the extensive bibliography will also be appreciated. After reading this book, beginners should be ready to read modern research papers in this rapidly growing field and contribute their own research to its development. Experienced readers will be interested in the finitary approach to Sturmian words that Christoffel words offer…

  8. The effect of filled pauses on the processing of the surface form and the establishment of causal connections during the comprehension of spoken expository discourse.

    Science.gov (United States)

    Cevasco, Jazmín; van den Broek, Paul

    2016-05-01

    The purpose of this study was to examine the effect of filled pauses (uh) on the verification of words and the establishment of causal connections during the comprehension of spoken expository discourse. With this aim, we asked Spanish-speaking students to listen to excerpts of interviews with writers, and to perform a word-verification task and a question-answering task on causal connectivity. There were two versions of the excerpts: filled pause present and filled pause absent. Results indicated that filled pauses increased verification times for words that preceded them, but did not make a difference on response times to questions on causal connectivity. The results suggest that, as signals of delay, filled pauses create a break with surface information, but they do not have the same effect on the establishment of meaningful connections.

  9. Contending with foreign accent in early word learning.

    Science.gov (United States)

    Schmale, Rachel; Hollich, George; Seidl, Amanda

    2011-11-01

    By their second birthday, children are beginning to map meaning to form with relative ease. One challenge for these developing abilities is separating information relevant to word identity (i.e. phonemic information) from irrelevant information (e.g. voice and foreign accent). Nevertheless, little is known about toddlers' abilities to ignore irrelevant phonetic detail when faced with the demanding task of word learning. In an experiment with English-learning toddlers, we examined the impact of foreign accent on word learning. Findings revealed that while toddlers aged 2;6 successfully generalized newly learned words spoken by a Spanish-accented speaker and a native English speaker, success of those aged 2;0 was restricted. Specifically, toddlers aged 2;0 failed to generalize words when trained by the native English speaker and tested by the Spanish-accented speaker. Data suggest that exposure to foreign accent in training may promote generalization of newly learned forms. These findings are considered in the context of developmental changes in early word representations.

  10. Type of object motion facilitates word mapping by preverbal infants.

    Science.gov (United States)

    Matatyaho-Bullaro, Dalit J; Gogate, Lakshmi; Mason, Zachary; Cadavid, Steven; Abdel-Mottaleb, Mohammed

    2014-02-01

    This study assessed whether specific types of object motion, which predominate in maternal naming to preverbal infants, facilitate word mapping by infants. A total of 60 full-term 8-month-old infants were habituated to two spoken words, /bæf/ and /wem/, synchronous with the handheld motions of a toy dragonfly and a fish or a lamb chop and a squiggly. They were presented in one of four experimental motion conditions-shaking, looming, upward, and sideways-and one all-motion control condition. Infants were then given a test that consisted of two mismatch (change) and two control (no-change) trials, counterbalanced for order. Results revealed that infants learned the word-object relations (i.e., looked longer on the mismatch trials relative to the control trials) in the shaking and looming motion conditions but not in the upward, sideways, and all-motion conditions. Infants learned the word-object relations in the looming and shaking conditions likely because these motions foreground the object for the infants. Thus, the type of gesture an adult uses matters during naming when preverbal infants are beginning to map words onto objects. The results suggest that preverbal infants learn word-object relations within an embodied system involving matches between infants' perception of motion and specific motion properties of caregivers' naming.

  11. Fast mapping of novel word forms traced neurophysiologically

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2011-11-01

    Full Text Available Human capacity to quickly learn new words, critical for our ability to communicate using language, is well known from behavioural studies and observations, but its neural underpinnings remain unclear. In this study, we have used event-related potentials to record brain activity to novel spoken word forms as they are being learnt by the human nervous system through passive auditory exposure. We found that the brain response dynamics change dramatically within the short (20 min) exposure session: as the subjects become familiarised with the novel word forms, the early (~100 ms) fronto-central activity they elicit increases in magnitude and becomes similar to that of known real words. At the same time, acoustically similar real words used as control stimuli show a relatively stable response throughout the recording session; these differences between the stimulus groups are confirmed using both factorial and linear regression analyses. Furthermore, acoustically matched novel non-speech stimuli do not demonstrate a similar response increase, suggesting neural specificity of this rapid learning phenomenon to linguistic stimuli. Left-lateralised perisylvian cortical networks appear to underlie such fast mapping of novel word forms onto the brain's mental lexicon.

  12. Getting the "Words" In

    Science.gov (United States)

    Bolinger, Dwight

    1970-01-01

    Suggests that grammar is not something into which words are plugged but is rather a mechanism by which words are served and that linguistics scientists must begin to devote a major part of their attention to lexicology. (TO)

  13. Understanding Medical Words

    Science.gov (United States)

    From the Summer 2009 issue: a tutorial that teaches you about many of the words related to your health care.

  14. Hybrid Transfer in an English-French Spoken Language Translator

    CERN Document Server

    Rayner, M; Rayner, Manny; Bouillon, Pierrette

    1995-01-01

    The paper argues the importance of high-quality translation for spoken language translation systems. It describes an architecture suitable for rapid development of high-quality limited-domain translation systems, which has been implemented within an advanced prototype English to French spoken language translator. The focus of the paper is the hybrid transfer model which combines unification-based rules and a set of trainable statistical preferences; roughly, rules encode domain-independent grammatical information and preferences encode domain-dependent distributional information. The preferences are trained from sets of examples produced by the system, which have been annotated by human judges as correct or incorrect. An experiment is described in which the model was tested on a 2000 utterance sample of previously unseen data.
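
    The division of labour in the hybrid model can be sketched as follows: assume the unification rules have already proposed candidate translations, each described by binary features, and train a perceptron-style preference model on examples judged correct or incorrect to choose among them. The feature names and toy data below are invented for illustration; the system's actual preference scheme is richer.

    ```python
    # Perceptron-style "preferences" that rerank rule-generated candidates.
    def score(features, weights):
        return sum(weights.get(f, 0.0) for f in features)

    def train(examples, epochs=10):
        """examples: list of (feature_set, is_correct) pairs from human judges."""
        weights = {}
        for _ in range(epochs):
            for feats, correct in examples:
                pred = score(feats, weights) > 0
                if pred != correct:  # mistake-driven update
                    for f in feats:
                        weights[f] = weights.get(f, 0.0) + (1.0 if correct else -1.0)
        return weights

    # Toy judge-annotated data; feature names are invented.
    examples = [({"lex:bank=banque", "dom:finance"}, True),
                ({"lex:bank=rive", "dom:finance"}, False),
                ({"lex:bank=rive", "dom:geography"}, True)]
    weights = train(examples)
    candidates = [("la banque", {"lex:bank=banque", "dom:finance"}),
                  ("la rive", {"lex:bank=rive", "dom:finance"})]
    print(max(candidates, key=lambda c: score(c[1], weights))[0])  # -> "la banque"
    ```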

  15. In a Word, History

    Science.gov (United States)

    Dohan, Mary Helen

    1977-01-01

    Understanding words like "bionics" will open the mind to the horizons of another time when words like "railroad" evoked wonder and "to fly to the moon" was a metaphor for the impossible dream. Suggests that history teachers and English teachers should join together in using words to teach both subjects. (Editor/RK)

  16. Formative Feedback in an Interactive Spoken CALL System

    OpenAIRE

    Tsourakis, Nikolaos; Rayner, Emmanuel; Baur, Claudia

    2014-01-01

    By definition spoken dialogue CALL systems should be easy to use and understand. However, interaction in this context is often far from unhindered. In this paper we introduce a formative feedback mechanism in our CALL system, which can monitor interaction, report errors and provide advice and suggestions to users. The distinctive feature of this mechanism is the ability to combine information from different sources and decide on the most pertinent feedback, which can also be adapted in terms ...

  17. A Platform for Multilingual Research in Spoken Dialogue Systems

    Science.gov (United States)

    2000-08-01

    University of Colorado, Boulder, CO 80309, USA; Universidad de las Americas, 72820 Santa Catarina Martir, Puebla, Mexico; Center for Spoken Language Understanding (CSLU). For multilingual spoken dialogue systems research, it is necessary to develop available, usable, and powerful tools and corpora. The CSLU Toolkit, in use at the Universidad de las Americas (UDLA) in Puebla, provides facilities for displaying, labeling, and manipulating speech. The collaboration between OGI and UDLA aimed to establish a platform for such research.

  18. Error Awareness and Recovery in Conversational Spoken Language Interfaces

    Science.gov (United States)

    2007-05-01

    …Olympus. Modulo an initial lack of documentation, no major problems were encountered in the development of this system. The updating framework was implemented as part of a generic dialog engine, decoupled from any particular dialog task. Modulo training data requirements, the…

  19. Acoustic Experimental Research on Chilean High School Students' Acquisition of Mandarin Monosyllabic Tones

    Institute of Scientific and Technical Information of China (English)

    周镇

    2016-01-01

    This research explored second-year students' acquisition of the four Mandarin tones at a public high school in Chile, using a tone-identification test and an acoustic study of monosyllabic tone production. In perception, T4 was the easiest tone to identify and T2 the hardest, with ease of identification ordered T4 > T3 > T1 > T2; pronunciation accuracy followed the same ranking, T4 > T3 > T1 > T2. An analysis of the problems revealed by this case study makes it possible to devise simple and effective plans for teaching the four tones to learners of Chinese in Chile and other Spanish-speaking countries.

  20. The effects of limited bandwidth and noise on verbal processing time and word recall in normal-hearing children.

    Science.gov (United States)

    McCreery, Ryan W; Stelmachowicz, Patricia G

    2013-09-01

    Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children, who are still developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal-to-noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and a 5.6 kHz low-pass cutoff. Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full-bandwidth and 5.6 kHz low-pass filter conditions, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected…