WorldWideScience

Sample records for spoken word presentations

  1. Towards Affordable Disclosure of Spoken Word Archives

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; Heeren, W.F.L.; Huijbregts, M.A.H.; Hiemstra, Djoerd; de Jong, Franciska M.G.; Larson, M; Fernie, K; Oomen, J; Cigarran, J.

    2008-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken word archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, the least we want to be

  2. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, this kind of probabilistic information about syllables may cue the locations…

  3. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  4. Interference of spoken word recognition through phonological priming from visual objects and printed words

    NARCIS (Netherlands)

    McQueen, J.M.; Hüttig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase

  5. Interference of spoken word recognition through phonological priming from visual objects and printed words

    OpenAIRE

    McQueen, J.; Huettig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g...

  6. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
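
The time-invariant diphone representation the abstract describes can be sketched directly: a word becomes a bag of "open diphones" (ordered phoneme pairs, adjacent or not), so the same word arriving earlier or later in the input maps to the same vector. The toy phoneme strings and the plain dot-product similarity below are illustrative assumptions, not the model's actual kernel or parameters.

```python
from itertools import combinations
from collections import Counter

def open_diphones(phonemes):
    """Bag of ordered phoneme pairs (adjacent or not) in a word.

    The representation is time-invariant: it has no position-specific
    units, unlike TRACE's reduplicated time-specific layers.
    """
    return Counter(combinations(phonemes, 2))

def similarity(word_a, word_b):
    """String-kernel similarity as a dot product over shared diphones."""
    a, b = open_diphones(word_a), open_diphones(word_b)
    return sum(count * b[pair] for pair, count in a.items())

# /kat/ and /kats/ share the diphones (k,a), (k,t) and (a,t).
print(similarity(list("kat"), list("kats")))  # → 3
```

One intuition for the computational savings claimed in the abstract: lexical units built on such kernels scale with the size of the diphone inventory rather than with the number of reduplicated time slices.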

  7. Auditory semantic processing in dichotic listening: effects of competing speech, ear of presentation, and sentential bias on N400s to spoken words in context.

    Science.gov (United States)

    Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer

    2014-12-01

    The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the le/RH produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the re/LH. The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  9. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f

  10. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic interference and with combined visual-and-acoustic interference, but not with visual interference alone. Long-term memory showed superior recall for…

  11. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  12. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    Science.gov (United States)

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  13. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,

  14. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable mismatched words elicited an earlier and stronger N400 than the three partial mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing.

  15. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
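
The 2 × 2 within-subject logic (lexical frequency crossed with first-syllable frequency) can be made concrete with simulated cell latencies. The numbers below are invented for illustration; the study itself fitted linear mixed models over trial-level latencies, not cell means.

```python
import statistics

# Simulated mean auditory lexical decision latencies (ms) per design cell;
# values are invented, not data from the study.
latencies = {
    ("high_lex", "high_syll"): [620, 635, 610],
    ("high_lex", "low_syll"):  [595, 600, 590],
    ("low_lex",  "high_syll"): [700, 710, 695],
    ("low_lex",  "low_syll"):  [660, 675, 665],
}
cell = {k: statistics.mean(v) for k, v in latencies.items()}

# Facilitatory lexical frequency effect: low-frequency words slower.
lex_effect = (cell[("low_lex", "high_syll")] + cell[("low_lex", "low_syll")]) / 2 \
           - (cell[("high_lex", "high_syll")] + cell[("high_lex", "low_syll")]) / 2

# Inhibitory first-syllable frequency effect: high-frequency first
# syllables slower, the pattern the study reports.
syll_effect = (cell[("high_lex", "high_syll")] + cell[("low_lex", "high_syll")]) / 2 \
            - (cell[("high_lex", "low_syll")] + cell[("low_lex", "low_syll")]) / 2

print(round(lex_effect), round(syll_effect))  # → 76 31
```

Averaging each factor over the levels of the other, as above, is what separates the two main effects in a crossed design; the mixed-model analysis additionally accounts for subject- and item-level variance.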

  16. Talker and background noise specificity in spoken word recognition memory

    Directory of Open Access Journals (Sweden)

    Angela Cooper

    2017-11-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker’s voice (talker condition) and background noise (noise condition) using a delayed recognition memory paradigm. The speech and noise signals were spectrally separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.

  17. The Activation of Embedded Words in Spoken Word Recognition

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  18. The Activation of Embedded Words in Spoken Word Recognition.

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions.

  19. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  20. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  1. V2 word order in subordinate clauses in spoken Danish

    DEFF Research Database (Denmark)

    Jensen, Torben Juel; Christensen, Tanya Karoli

    …are asymmetrically distributed, we argue that the word order difference should rather be seen as a signal of (subtle) semantic differences. In main clauses, V3 is highly marked in comparison to V2, and occurs in what may be called emotives. In subordinate clauses, V2 is marked and signals what has been called… ”assertiveness”, but is rather a question of foregrounding (cf. Simons 2007: Main Point of Utterance). The paper presents the results of a study of word order in subordinate clauses in contemporary spoken Danish and focuses on how to include the proposed semantic difference as a factor influencing the choice… studies of two age cohorts of speakers in Copenhagen, recorded in the 1980s and again in 2005-07, and on recent recordings with two age cohorts of speakers from the western part of Jutland. This makes it possible to study variation and change with respect to word order in subordinate clauses in both real…

  2. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and where strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  3. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  4. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    Science.gov (United States)

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
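
Neighborhood density, as used in the abstract, is conventionally operationalized as the number of lexicon entries one phoneme substitution, deletion, or addition away from the target. A minimal sketch with an invented toy lexicon (phonemes written as single characters for simplicity):

```python
def neighbors(word, lexicon):
    """Phonological neighbors: entries at edit distance 1 from `word`
    (one segment substituted, deleted, or added)."""
    def one_edit_apart(a, b):
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):  # substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        if len(a) > len(b):   # normalize so b is the longer string
            a, b = b, a
        # deletion/addition: removing one segment of b must yield a
        return any(b[:i] + b[i + 1:] == a for i in range(len(b)))
    return [w for w in lexicon if w != word and one_edit_apart(word, w)]

# Toy lexicon in a one-character-per-phoneme transcription (invented).
lexicon = ["kat", "bat", "kab", "at", "kats", "dog"]
print(neighbors("kat", lexicon))  # → ['bat', 'kab', 'at', 'kats']
```

Under the hypothesis the abstract proposes, a word with many neighbors (a long list returned here) competes with them at the lexical level and is recognized more slowly, while high-probability segment sequences speed responses at the sublexical level.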

  5. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    Science.gov (United States)

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  6. Toddlers' sensitivity to within-word coarticulation during spoken word recognition: Developmental differences in lexical competition.

    Science.gov (United States)

    Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix

    2016-12-01

    To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    OpenAIRE

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2011-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were la...

  8. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    data of the corpus and includes more formal audio material (lectures, TV and ... meticulous word-class tagging of nouns, adjectives, verbs etc., this book is not limited to word ... Fréquences d'utilisation des mots en français écrit contemporain.

  9. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    Science.gov (United States)

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactic information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information of Chinese…

  10. Allophones, not phonemes in spoken-word recognition

    NARCIS (Netherlands)

    Mitterer, H.A.; Reinisch, E.; McQueen, J.M.

    2018-01-01

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic

  11. The Impact of Orthographic Consistency on German Spoken Word Identification

    Science.gov (United States)

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  12. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    Science.gov (United States)

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  13. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  14. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  15. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang Zhang

    2014-02-01

    The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is neither mandatory nor universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  16. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven


  17. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    Science.gov (United States)

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424
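
    The "visual world" analysis described in this record can be illustrated with a minimal sketch: for each small time bin after word onset, compute the proportion of trials in which gaze fell on the target versus its phonological competitor. The trial structure and field names below are illustrative assumptions, not the study's actual data format.

    ```python
    # Sketch of a fixation-proportion curve over time, binned at 50 ms.
    # Each trial lists fixations as (start_ms, end_ms, region-of-interest).

    def fixation_proportions(trials, bin_ms=50, window_ms=1000):
        """Return, per time bin, the proportion of trials fixating each ROI."""
        n_bins = window_ms // bin_ms
        counts = {"target": [0] * n_bins, "competitor": [0] * n_bins}
        for trial in trials:
            for t_bin in range(n_bins):
                t = t_bin * bin_ms + bin_ms / 2  # bin midpoint
                for start, end, roi in trial["fixations"]:
                    if start <= t < end and roi in counts:
                        counts[roi][t_bin] += 1
                        break
        n = len(trials)
        return {roi: [c / n for c in cs] for roi, cs in counts.items()}

    # Toy data: a competitor look early in one trial, target looks late in both,
    # as would happen when a memory load delays target discrimination.
    trials = [
        {"fixations": [(0, 400, "competitor"), (400, 1000, "target")]},
        {"fixations": [(0, 200, "distractor"), (200, 1000, "target")]},
    ]
    props = fixation_proportions(trials)
    ```

    Delayed discrimination under high load would show up as the target curve crossing the competitor curve later in time.
    
    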

  18. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss.

  19. Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors

    Science.gov (United States)

    Kalyuga, Slava

    2012-01-01

    Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…

  20. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    OpenAIRE

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2012-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual varia...

  1. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    OpenAIRE

    Jesse, A.; McQueen, J.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker...

  2. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustive recollection of the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection.

  3. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  4. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    Science.gov (United States)

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences.

  5. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition.
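
    A windowed ERP component measure of the kind reported here (P2, 200-270 ms post-target onset) is typically obtained by averaging epochs across trials and taking the mean voltage inside the window. A minimal pure-Python sketch, with the sampling rate and epoch layout as assumptions:

    ```python
    def mean_amplitude(epochs, srate_hz, t_start_ms, t_end_ms):
        """Average epochs into an ERP, then return the mean voltage in a window.

        epochs: list of equal-length voltage lists, time-locked to target onset.
        """
        n_trials = len(epochs)
        n_samples = len(epochs[0])
        erp = [sum(ep[i] for ep in epochs) / n_trials for i in range(n_samples)]
        i0 = int(t_start_ms / 1000 * srate_hz)  # window start in samples
        i1 = int(t_end_ms / 1000 * srate_hz)    # window end in samples
        window = erp[i0:i1]
        return sum(window) / len(window)

    # Two flat toy epochs at 500 Hz (400 ms long); their ERP averages to 2.0,
    # so the mean in the 200-270 ms window is 2.0 as well.
    epochs = [[1.0] * 200, [3.0] * 200]
    p2 = mean_amplitude(epochs, srate_hz=500, t_start_ms=200, t_end_ms=270)
    ```

    A mismatch effect would then be the difference between this measure in the mismatch and match conditions, compared across subjects.
    
    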

  6. The role of grammatical category information in spoken word retrieval.

    Science.gov (United States)

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production.

  7. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information at the partial-phonological overlap was manipulated; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that the phonological competitor effects were observed in both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  8. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by the cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that the spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competitions between multiple candidate words and lead to increased processing difficulties, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among the candidate words during spoken word recognition.

  9. The gender congruency effect during bilingual spoken-word recognition

    Science.gov (United States)

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  10. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
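
    The validation logic in this record rests on correlating per-item scores collected online with those collected in the lab. That step can be sketched with a pure-Python Pearson correlation; the score lists below are made-up illustrations, not the study's data.

    ```python
    # Pearson r between per-item word recognition accuracy measured in the lab
    # and the same items measured online (hypothetical values).

    def pearson_r(xs, ys):
        """Pearson product-moment correlation between two equal-length lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    lab    = [0.92, 0.85, 0.78, 0.60, 0.55]  # proportion correct per item, lab
    online = [0.88, 0.80, 0.70, 0.58, 0.45]  # same items, collected online
    r = pearson_r(lab, online)
    ```

    A strong positive r across items (despite online participants being faster and less accurate overall) is what licenses treating the online platform as a substitute for lab collection.
    
    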

  11. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  12. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition.
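
    The theta/alpha band split at the heart of this record can be illustrated with a toy discrete Fourier transform. The actual study used time-frequency analysis and spatial filtering; this is only a sketch of band-limited power, with sampling rate and band edges as assumed parameters.

    ```python
    import math

    def band_power(signal, srate_hz, f_lo, f_hi):
        """Sum normalized DFT power over frequency bins within [f_lo, f_hi] Hz."""
        n = len(signal)
        power = 0.0
        for k in range(n // 2 + 1):
            freq = k * srate_hz / n
            if f_lo <= freq <= f_hi:
                re = sum(x * math.cos(2 * math.pi * k * i / n)
                         for i, x in enumerate(signal))
                im = sum(-x * math.sin(2 * math.pi * k * i / n)
                         for i, x in enumerate(signal))
                power += (re * re + im * im) / (n * n)
        return power

    # A pure 10 Hz sine should put its power in the alpha band (8-12 Hz),
    # with essentially none in the theta band (3-7 Hz).
    srate = 100
    sig = [math.sin(2 * math.pi * 10 * t / srate) for t in range(srate)]
    alpha = band_power(sig, srate, 8, 12)
    theta = band_power(sig, srate, 3, 7)
    ```

    In a real pipeline, band power would be computed per trial and time window (e.g., via wavelets) and then contrasted across the word-pseudoword continuum.
    
    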

  13. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    Science.gov (United States)

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  14. "A Unified Poet Alliance": The Personal and Social Outcomes of Youth Spoken Word Poetry Programming

    Science.gov (United States)

    Weinstein, Susan

    2010-01-01

    This article places youth spoken word (YSW) poetry programming within the larger framework of arts education. Drawing primarily on transcripts of interviews with teen poets and adult teaching artists and program administrators, the article identifies specific benefits that participants ascribe to youth spoken word, including the development of…

  15. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  16. Working memory affects older adults' use of context in spoken-word recognition.

    Science.gov (United States)

    Janse, Esther; Jesse, Alexandra

    2014-01-01

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect target phonemes, as quickly and as accurately as possible, in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  17. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided evidence that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
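
    The word-counting step behind this kind of computerized quantitative text analysis can be sketched as follows. Real studies use validated dictionaries (e.g., LIWC); the tiny word lists here are placeholders for illustration only.

    ```python
    # Proportion of positive- and negative-emotion words in a statement,
    # using toy lexicons (stand-ins for a validated emotion dictionary).

    POSITIVE = {"love", "peace", "thank", "hope", "happy"}
    NEGATIVE = {"hate", "fear", "pain", "sorry", "guilt"}

    def emotion_proportions(statement):
        """Return (positive, negative) emotion-word proportions of a text."""
        words = [w.strip(".,!?").lower() for w in statement.split()]
        n = len(words)
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return pos / n, neg / n

    pos, neg = emotion_proportions("I love you all. Peace and hope to my family.")
    ```

    Comparing such proportions against base rates from reference corpora is what supports the claim that final statements are unusually positive.
    
    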

  18. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    Directory of Open Access Journals (Sweden)

    Michael Inspector

Full Text Available OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set, flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  19. Interaction in Spoken Word Recognition Models: Feedback Helps

    Science.gov (United States)

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized more quickly with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
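The feedback mechanism at issue can be illustrated with a deliberately tiny interactive-activation toy: phoneme units excite word units, and word units optionally feed activation back to their constituent phonemes. This is a hedged sketch in the spirit of TRACE, not the model itself; the lexicon, evidence values, and parameters are all illustrative assumptions. With the final phoneme degraded by "noise", the target word crosses the recognition threshold only when lexical feedback is switched on:

```python
# Toy interactive-activation network in the spirit of TRACE (not the
# model itself): phoneme units excite word units; word units optionally
# feed activation back to their constituent phonemes. The lexicon,
# evidence values, and all parameters are illustrative assumptions.
LEXICON = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"]}
EVIDENCE = {"k": 1.0, "ae": 1.0, "t": 0.2,   # final /t/ degraded by noise
            "d": 0.0, "o": 0.0, "g": 0.0}

def cycles_to_recognize(feedback_gain, threshold=0.8, max_cycles=80):
    """Cycles until 'cat' crosses threshold, or None if it never does."""
    phon = {p: 0.0 for p in EVIDENCE}
    word = {w: 0.0 for w in LEXICON}
    for cycle in range(1, max_cycles + 1):
        # Phoneme net input: bottom-up evidence plus top-down feedback.
        new_phon = {}
        for p in phon:
            fb = sum(word[w] for w, ps in LEXICON.items() if p in ps)
            net = EVIDENCE[p] + feedback_gain * fb
            new_phon[p] = min(1.0, 0.9 * phon[p] + 0.1 * net)
        # Word net input: mean constituent activation minus lateral inhibition.
        for w, ps in LEXICON.items():
            net = sum(new_phon[p] for p in ps) / len(ps)
            inhib = sum(word[v] for v in LEXICON if v != w)
            word[w] = min(1.0, max(0.0, 0.9 * word[w] + 0.1 * (net - 0.5 * inhib)))
        phon = new_phon
        if word["cat"] >= threshold:
            return cycle
    return None
```

Without feedback, the degraded /t/ caps the word's asymptotic activation below threshold; with feedback, the growing activation of *cat* tops up /t/, which in turn raises *cat*'s own input, a rich-get-richer loop that the feedforward version lacks.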

  1. Interaction in Spoken Word Recognition Models: Feedback Helps

    Directory of Open Access Journals (Sweden)

    James S. Magnuson

    2018-04-01

Full Text Available Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized more quickly with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  2. The Effects of Listener's Familiarity about a Talker on the Free Recall Task of Spoken Words

    Directory of Open Access Journals (Sweden)

    Chikako Oda

    2011-10-01

Full Text Available Several recent studies have examined the interaction between a talker's acoustic characteristics and spoken word recognition in speech perception, and have shown that a listener's familiarity with a talker influences the ease of spoken word processing. The present study examined the effect of listeners' familiarity with talkers on the free recall of words spoken by two talkers. Subjects participated in three conditions of the task, in which the listener had (1) explicit knowledge, (2) implicit knowledge, or (3) no knowledge of the talker. In condition (1), subjects were familiar with the talkers' voices and were initially informed whose voices they would hear. In condition (2), subjects were familiar with the talkers' voices but were not informed whose voices they would hear. In condition (3), subjects were entirely unfamiliar with the talkers' voices and were not informed whose voices they would hear. We analyzed the percentage of correct answers and compared these results across the three conditions. We will discuss the possibility that a listener's knowledge of an individual talker's acoustic characteristics, stored in long-term memory, could reduce the quantity of cognitive resources required in verbal information processing.

  3. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects … from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect…

  4. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  5. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    Science.gov (United States)

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.
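The propensity matching used to equate the DLD and TD groups can be sketched as greedy 1:1 nearest-neighbour matching on precomputed propensity scores. The scores are assumed to have been estimated beforehand (e.g., with a logistic regression of group membership on age, gender, SES, and maternal education); the caliper value and the greedy strategy below are illustrative assumptions, since the study does not specify its matching algorithm:

```python
# Greedy 1:1 nearest-neighbour matching on propensity scores. The scores
# are assumed to have been estimated already (e.g., with a logistic
# regression of group membership on the matching covariates); the caliper
# value and greedy strategy are illustrative assumptions.
def greedy_match(case_scores, control_scores, caliper=0.1):
    """Return (case_id, control_id) pairs; each control is used at most once."""
    available = dict(control_scores)  # id -> propensity score
    pairs = []
    for case_id, score in sorted(case_scores.items(), key=lambda kv: kv[1]):
        if not available:
            break
        best = min(available, key=lambda cid: abs(available[cid] - score))
        if abs(available[best] - score) <= caliper:
            pairs.append((case_id, best))
            del available[best]
    return pairs

pairs = greedy_match({"a": 0.30, "b": 0.52}, {"x": 0.31, "y": 0.50, "z": 0.90})
```

The caliper discards matches whose scores differ too much, trading sample size for balance; optimal (rather than greedy) matching would minimise the total score distance globally.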

  6. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  7. An fMRI study of concreteness effects in spoken word recognition.

    Science.gov (United States)

    Roxbury, Tracy; McMahon, Katie; Copland, David A

    2014-09-30

Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high imageability nouns, (b) abstract, low imageability nouns and (c) opaque legal pseudowords presented in a pseudorandomised, event-related design. Activation for the concrete, abstract and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings of concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than both the abstract and pseudoword conditions, and the abstract condition was significantly faster than the pseudoword condition (p …). Significant activity was also elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions that are activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.

  8. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
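The intensity-envelope manipulation of Experiment 1b can be sketched as: rectify the word waveform, smooth it into an amplitude envelope, and multiply the background sound by that envelope sample by sample. The window size and toy signals below are illustrative assumptions, not the study's actual signal processing:

```python
# Crude sketch of the intensity-modulation manipulation: rectify the
# word waveform, smooth it into an amplitude envelope, and impose it on
# the background sound by pointwise multiplication. Window size and the
# toy signals are illustrative; real stimuli would be sampled audio.
def envelope(signal, win=5):
    half = win // 2
    env = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        env.append(sum(abs(s) for s in signal[lo:hi]) / (hi - lo))
    return env

def modulate(noise, word):
    """Scale each noise sample by the word's amplitude envelope."""
    return [n * e for n, e in zip(noise, envelope(word))]
```

The modulated sound is silent wherever the word is silent and loud where the word is loud, which is what makes word and sound cohere into a single auditory object.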

  9. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    Science.gov (United States)

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  10. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.
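The cohort construct itself is easy to make concrete: after each successive segment of the input, the candidate set is the portion of the lexicon consistent with what has been heard so far. A minimal sketch over a toy lexicon, with orthographic prefixes standing in for phoneme sequences (both are illustrative assumptions):

```python
# Sketch of cohort narrowing: after each successive segment, the cohort
# is the set of lexicon entries consistent with the input so far.
# Orthographic prefixes stand in for phoneme sequences; the lexicon is a toy.
LEXICON = ["captain", "captive", "cap", "cat", "dog"]

def cohort_by_segment(spoken):
    """Cohort of candidates after each successive segment of the input."""
    return [[w for w in LEXICON if w.startswith(spoken[:i])]
            for i in range(1, len(spoken) + 1)]

steps = cohort_by_segment("captain")
```

The cohort shrinks to a single candidate at the word's uniqueness point (here after "capta"); the fMRI manipulation above varies how large and how strongly competing this candidate set is.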

  11. Orthographic consistency affects spoken word recognition at different grain-sizes

    DEFF Research Database (Denmark)

    Dich, Nadya

    2014-01-01

A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous…

  12. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  13. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    Science.gov (United States)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  14. The Effect of Lexical Frequency on Spoken Word Recognition in Young and Older Listeners

    Science.gov (United States)

    Revill, Kathleen Pirog; Spieler, Daniel H.

    2011-01-01

When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place a greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults’ eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degradation of auditory quality in younger listeners did not reproduce this result. These data are most consistent with an increased role for lexical frequency with age. PMID:21707175

  15. Effects of lexical competition on immediate memory span for spoken words.

    Science.gov (United States)

    Goh, Winston D; Pisoni, David B

    2003-08-01

    Current theories and models of the structural organization of verbal short-term memory are primarily based on evidence obtained from manipulations of features inherent in the short-term traces of the presented stimuli, such as phonological similarity. In the present study, we investigated whether properties of the stimuli that are not inherent in the short-term traces of spoken words would affect performance in an immediate memory span task. We studied the lexical neighbourhood properties of the stimulus items, which are based on the structure and organization of words in the mental lexicon. The experiments manipulated lexical competition by varying the phonological neighbourhood structure (i.e., neighbourhood density and neighbourhood frequency) of the words on a test list while controlling for word frequency and intra-set phonological similarity (family size). Immediate memory span for spoken words was measured under repeated and nonrepeated sampling procedures. The results demonstrated that lexical competition only emerged when a nonrepeated sampling procedure was used and the participants had to access new words from their lexicons. These findings were not dependent on individual differences in short-term memory capacity. Additional results showed that the lexical competition effects did not interact with proactive interference. Analyses of error patterns indicated that item-type errors, but not positional errors, were influenced by the lexical attributes of the stimulus items. These results complement and extend previous findings that have argued for separate contributions of long-term knowledge and short-term memory rehearsal processes in immediate verbal serial recall tasks.
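Neighbourhood density, as manipulated here, is conventionally the count of words that differ from the target by a single phoneme substitution, deletion, or addition. A sketch of that metric, using orthographic strings as stand-ins for phonemic transcriptions (an illustrative simplification):

```python
# Phonological neighbours in the conventional one-edit sense: words formed
# by a single phoneme substitution, deletion, or addition. Orthographic
# strings stand in for phonemic transcriptions in this illustration.
def is_neighbour(a, b):
    if a == b:
        return False
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):  # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    shorter, longer = sorted((a, b), key=len)
    # one deletion from the longer form yields the shorter form
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

def density(word, lexicon):
    """Neighbourhood density: number of one-edit neighbours in the lexicon."""
    return sum(is_neighbour(word, w) for w in lexicon)
```

Neighbourhood frequency, the second manipulated variable, would additionally weight each neighbour by its corpus frequency.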

  16. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and morphological processing (i.e., semantic access to the first constituent) that occurs at an early processing stage before access to the representation of the whole word in Chinese.

  17. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

Full Text Available Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of the syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
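The MMN itself is computed from the difference wave: average the ERP epochs for standards and deviants, subtract, and take the peak negativity in a post-stimulus window. A minimal sketch with illustrative epoch data; real pipelines add filtering, artifact rejection, and baseline correction:

```python
# MMN from the difference wave: grand-average the standard and deviant
# epochs, subtract, and take the most negative value in a post-stimulus
# window. Epoch data and window indices are illustrative only.
def grand_average(epochs):
    """Samplewise mean across epochs (each epoch is a list of samples)."""
    return [sum(samples) / len(epochs) for samples in zip(*epochs)]

def mmn_amplitude(standard_epochs, deviant_epochs, window):
    std = grand_average(standard_epochs)
    dev = grand_average(deviant_epochs)
    diff = [d - s for s, d in zip(std, dev)]
    lo, hi = window
    return min(diff[lo:hi])  # most negative value in the window
```

Comparing this amplitude between the familiar-word and unfamiliar-word deviants gives the enhancement reported above.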

  18. Discourse context and the recognition of reduced and canonical spoken words

    OpenAIRE

    Brouwer, S.; Mitterer, H.; Huettig, F.

    2013-01-01

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" ...

  19. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  20. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  1. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)

    2014-01-01

Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.
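Frequency lists of this kind are, at bottom, normalised token counts per variety, conventionally reported per million words. A sketch with two illustrative mini-"corpora"; the BNC's actual tokenisation, tagging, and lemmatisation are far more elaborate:

```python
# Per-variety word frequencies normalised per million words, as in corpus
# frequency lists. The two mini "corpora" are illustrative stand-ins.
import re
from collections import Counter

def freq_per_million(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {w: c * 1_000_000 / len(tokens) for w, c in counts.items()}

spoken = freq_per_million("well I mean it was really really good")
written = freq_per_million("the results were good and the method was sound")
```

Comparing the same word's per-million rate across the spoken and written dictionaries is exactly the speech-versus-writing contrast the volume tabulates.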

  2. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Interference Effects on the Recall of Pictures, Printed Words and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    Thirty college undergraduates participated in a study of the effects of acoustic and visual interference on the recall of word and picture triads in both short-term and long-term memory. The subjects were presented 24 triads of monosyllabic nouns representing all of the possible combinations of presentation types: pictures, printed words, and…

  4. Development of brain networks involved in spoken word processing of Mandarin Chinese.

    Science.gov (United States)

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R

    2011-08-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.

  5. Competition in the perception of spoken Japanese words

    NARCIS (Netherlands)

    Otake, T.; McQueen, J.M.; Cutler, A.

    2010-01-01

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors,

  6. Attention demands of spoken word planning: A review

    Directory of Open Access Journals (Sweden)

    Ardi eRoelofs

    2011-11-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent to which the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and nonlinguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated nonlinguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated nonlinguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment.

  7. Modulating the Focus of Attention for Spoken Words at Encoding Affects Frontoparietal Activation for Incidental Verbal Memory

    OpenAIRE

    Christensen, Thomas A.; Almryde, Kyle R.; Fidler, Lesley J.; Lockwood, Julie L.; Antonucci, Sharon M.; Plante, Elena

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as ...

  8. Intentional and Reactive Inhibition during Spoken-Word Stroop Task Performance in People with Aphasia

    Science.gov (United States)

    Pompon, Rebecca Hunting; McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.

    2015-01-01

    Purpose: The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition--both intentional and reactive inhibition--during spoken-word production in PWA and in…

  9. Learning and Consolidation of New Spoken Words in Autism Spectrum Disorder

    Science.gov (United States)

    Henderson, Lisa; Powell, Anna; Gaskell, M. Gareth; Norbury, Courtenay

    2014-01-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words…

  10. Guide to Spoken-Word Recordings: Popular Literature. Reference Circular No. 95-01.

    Science.gov (United States)

    Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.

    This reference circular contains selected sources for the purchase, rental, or loan of fiction and nonfiction spoken-word recordings. The sources in sections 1, 2, and 3 are commercial and, unless otherwise noted, offer abridged and unabridged titles on audio cassette. Sources in section 1 make available popular fiction; classics; poetry; drama;…

  11. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    Science.gov (United States)

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  12. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    Science.gov (United States)

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  13. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    Science.gov (United States)

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
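
    The abstract above quantifies lexical competition via a phi-square measure of perceptual confusability. As a rough illustration of that idea (not the authors' actual procedure), the sketch below computes a symmetric chi-square ("phi-square-style") distance between rows of a segment confusion matrix; the function names and confusion counts are invented for the example.

```python
# Phi-square-style perceptual distance between two confusion-probability
# vectors (rows of a segment confusion matrix). Illustrative only: the
# confusion counts below are invented, not from Strand & Sommers (2011).

def phi_square(p, q):
    """Symmetric chi-square distance between two probability vectors."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return sum((pi - qi) ** 2 / (pi + qi) for pi, qi in zip(p, q) if pi + qi > 0)

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

# Hypothetical confusion counts: how often each of /b/, /p/, /m/ was
# reported when the row segment was presented.
conf_b = normalize([80, 15, 5])   # /b/ mostly heard as /b/
conf_p = normalize([20, 70, 10])  # /p/ often confused with /b/
conf_m = normalize([5, 5, 90])    # /m/ rarely confused with either

# Smaller distance = more confusable = stronger expected competition.
d_bp = phi_square(conf_b, conf_p)
d_bm = phi_square(conf_b, conf_m)
print(d_bp < d_bm)  # /b/-/p/ are more confusable than /b/-/m/
```

    In a full competition metric, such pairwise distances would be aggregated over a word's perceptual neighbors, weighted by word frequency.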

  14. Spoken word production: A theory of lexical access

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2001-01-01

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker's focusing on a target concept and ending with the initiation of articulation. The initial

  15. Attention demands of spoken word planning: A review

    NARCIS (Netherlands)

    Roelofs, A.P.A.; Piai, V.

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot

  16. Event-related potentials reflecting the frequency of unattended spoken words

    DEFF Research Database (Denmark)

    Shtyrov, Yury; Kimppa, Lilli; Pulvermüller, Friedemann

    2011-01-01

    How are words represented in the human brain, and can these representations be qualitatively assessed with respect to their structure and properties? Recent research demonstrates that neurophysiological signatures of individual words can be measured when subjects do not focus their attention … in passive non-attend conditions, with acoustically matched high- and low-frequency words along with pseudo-words. Using factorial and correlation analyses, we found that, already at ~120 ms after the spoken stimulus information was available, the amplitude of brain responses was modulated by the words' lexical … for the most frequent word stimuli; later on (~270 ms), a more global lexicality effect with bilateral perisylvian sources was found for all stimuli, suggesting faster access to more frequent lexical entries. Our results support the account of word memory traces as interconnected neuronal circuits, and suggest …

  17. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  18. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Science.gov (United States)

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  19. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  20. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-05-16

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanisms induced by the experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at an early stage of recognition (~150-250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300-500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements.

  1. Spectrotemporal processing drives fast access to memory traces for spoken words.

    Science.gov (United States)

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
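
    The MMN analysis described above rests on a standard computation: average the deviant and standard epochs separately, subtract to obtain a difference wave, and take the most negative point in an early window. The sketch below illustrates that computation on synthetic data; all amplitudes, latencies, and trial counts are invented, and the study itself of course measured real EEG rather than simulations.

```python
# Sketch of the standard MMN quantification: deviant-minus-standard
# difference wave, peak-negativity search in an early time window.
# All data here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
fs = 500                               # sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)       # epoch from -100 to 400 ms

def make_epochs(n, mmn_amp):
    """n single-trial epochs: noise plus a Gaussian deflection near 200 ms."""
    bump = mmn_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return rng.normal(0, 2.0, (n, t.size)) + bump

standards = make_epochs(300, mmn_amp=0.0)
deviants = make_epochs(100, mmn_amp=-3.0)   # deviant words carry the MMN

diff_wave = deviants.mean(axis=0) - standards.mean(axis=0)
win = (t >= 0.1) & (t <= 0.3)               # search window, in seconds
mmn_latency = t[win][np.argmin(diff_wave[win])]
mmn_amplitude = diff_wave[win].min()
print(round(float(mmn_latency), 3), round(float(mmn_amplitude), 1))
```

    In practice the difference wave would also be averaged over frontocentral channels before the peak search, matching the scalp distribution reported in the abstract.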

  2. Children's Spoken Word Recognition and Contributions to Phonological Awareness and Nonword Repetition: A 1-Year Follow-Up

    Science.gov (United States)

    Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.

    2009-01-01

    This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…

  3. The Self-Organization of a Spoken Word

    Science.gov (United States)

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213
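
    A small numerical illustration of the distributional reasoning above: for a lognormal, the log of the data is Gaussian, so its sample mean and standard deviation are the maximum-likelihood estimates of the parameters. The sketch simulates lognormal "pronunciation times" and recovers those parameters; the mu/sigma values are invented for the example, whereas the paper fit lognormal and inverse power law mixtures to real naming data.

```python
# Toy parameter recovery for a lognormal pronunciation-time distribution.
# Parameters are illustrative, not taken from Holden & Rajaraman (2012).
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = np.log(0.6), 0.25          # median ~600 ms, moderate skew
rt = rng.lognormal(mean=mu, sigma=sigma, size=5000)

# log(RT) is Gaussian for lognormal data, so sample moments of the log
# are the maximum-likelihood estimates of mu and sigma.
mu_hat, sigma_hat = np.log(rt).mean(), np.log(rt).std()

print(round(float(np.exp(mu_hat)), 2))   # recovered median, close to 0.6 s
print(round(float(sigma_hat), 2))        # recovered shape, close to 0.25
```

    Distinguishing a pure lognormal from a lognormal/inverse power law mixture, as the paper does, additionally requires examining the upper tail (e.g., on log-log survival plots), where the power law component departs from the lognormal.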

  4. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    Science.gov (United States)

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study took a look at the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, like in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  6. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant (CI) or hearing aid (HA) efficiency in Persian-language children with severe-to-profound hearing loss. The research was a cross-sectional study of 60 Persian-speaking children aged 5-7 years. The assessment tool was a subtest of the Persian version of the Test of Language Development-Primary 3, administered under two conditions: auditory-only and audiovisual presentation. The test was a closed set of 30 words presented orally by a speech-language pathologist. Scores for audiovisual word perception were significantly higher than for the auditory-only condition in the children with normal hearing (P<0.05) … audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess whether a cochlear implant or hearing aid has been effective for a child with severe-to-profound hearing loss; i.e., if a child using a CI or HA obtains higher scores for audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately, with an effective CI or HA being one of the main factors in auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    A speaker's lip movements are highly informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information ordinarily requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one barrier to the wider use of multi-modal speech processing. In this study, we developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement from the light reflected from the mouth region. In an experiment, we achieved a 66% word recognition rate using lip movement features alone. This result shows that the sensor, combined with a microphone mounted on the same headset, can serve as a tool for multi-modal speech processing.

  8. The influence of talker and foreign-accent variability on spoken word identification.

    Science.gov (United States)

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  9. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan eSumner

    2014-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  10. Why not model spoken word recognition instead of phoneme monitoring?

    NARCIS (Netherlands)

    Vroomen, J.; de Gelder, B.

    2000-01-01

    Norris, McQueen & Cutler present a detailed account of the decision stage of the phoneme monitoring task. However, we question whether this contributes to our understanding of the speech recognition process itself, and we fail to see why phonotactic knowledge plays a role in phoneme monitoring.

  11. Long-term temporal tracking of speech rate affects spoken-word recognition.

    Science.gov (United States)

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  12. An fMRI study of concreteness effects during spoken word recognition in aging. Preservation or attenuation?

    Directory of Open Access Journals (Sweden)

    Tracy Roxbury

    2016-01-01

    It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and their associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete versus abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing.

  13. The influence of orthographic experience on the development of phonological preparation in spoken word production.

    Science.gov (United States)

    Li, Chuchu; Wang, Min

    2017-08-01

    Three sets of experiments using picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of the phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who have not received formal literacy instruction, Grade 1 children who are comparatively more exposed to the alphabetic pinyin system and have very limited Chinese character knowledge, Grades 2 and 4 children who have better character knowledge and more exposure to characters, and skilled adult readers who have the most advanced character knowledge and most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select the onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.

  14. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  15. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  16. Modulating the Focus of Attention for Spoken Words at Encoding Affects Frontoparietal Activation for Incidental Verbal Memory

    Directory of Open Access Journals (Sweden)

    Thomas A. Christensen

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as listeners heard a list of single English nouns. We then presented these words again in the context of a recognition task and assessed the effect of modulating attention at encoding on the BOLD responses to words that were either attended strongly, weakly, or not heard previously. MRI revealed activity in right-lateralized inferior parietal and prefrontal regions, and positive BOLD signals varied with the relative level of attention present at encoding. Temporal analysis of hemodynamic responses further showed that the time course of BOLD activity was modulated differentially by unintentionally encoded words compared to novel items. Our findings largely support current models of memory consolidation and retrieval, but they also provide fresh evidence for hemispheric differences and functional subdivisions in right frontoparietal attention networks that help shape auditory episodic recall.

  17. Modulating the focus of attention for spoken words at encoding affects frontoparietal activation for incidental verbal memory.

    Science.gov (United States)

    Christensen, Thomas A; Almryde, Kyle R; Fidler, Lesley J; Lockwood, Julie L; Antonucci, Sharon M; Plante, Elena

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as listeners heard a list of single English nouns. We then presented these words again in the context of a recognition task and assessed the effect of modulating attention at encoding on the BOLD responses to words that were either attended strongly, weakly, or not heard previously. MRI revealed activity in right-lateralized inferior parietal and prefrontal regions, and positive BOLD signals varied with the relative level of attention present at encoding. Temporal analysis of hemodynamic responses further showed that the time course of BOLD activity was modulated differentially by unintentionally encoded words compared to novel items. Our findings largely support current models of memory consolidation and retrieval, but they also provide fresh evidence for hemispheric differences and functional subdivisions in right frontoparietal attention networks that help shape auditory episodic recall.

  18. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition

    Science.gov (United States)

    Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland

    2018-01-01

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…

  19. Spoken Word Recognition and Serial Recall of Words from Components in the Phonological Network

    Science.gov (United States)

    Siew, Cynthia S. Q.; Vitevitch, Michael S.

    2016-01-01

    Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…

  20. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    Science.gov (United States)

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  1. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.
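    The "correlational time course analysis" described above can be illustrated with a toy sketch: at each time point, a per-trial linguistic variable is correlated across trials with per-trial ROI amplitudes, yielding a time course of r values. All numbers, trial counts, and variable names below are invented for illustration; this is not the authors' actual MEG pipeline.

    ```python
    # Minimal sketch of a correlational time-course analysis:
    # correlate a per-trial linguistic variable (here, invented
    # imageability ratings) with per-trial ROI amplitudes at each
    # time point, across trials.
    import math

    def pearson(xs, ys):
        """Pearson correlation between two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Invented example: 4 trials, 3 time points of ROI activity per trial
    imageability = [1.0, 2.0, 3.0, 4.0]
    roi_timecourses = [
        [0.1, 0.9, 0.2],   # trial 1
        [0.2, 1.8, 0.1],   # trial 2
        [0.1, 3.1, 0.3],   # trial 3
        [0.3, 3.9, 0.2],   # trial 4
    ]

    # Correlation time course: one r value per time point; a peak in r
    # marks the window in which the variable modulates the ROI
    r_timecourse = [pearson(imageability, [t[i] for t in roi_timecourses])
                    for i in range(3)]
    print([round(r, 2) for r in r_timecourse])
    ```

    In this toy data the middle time point tracks imageability closely, so the r time course peaks there, which is the kind of pattern the ROI analysis looks for.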

  2. Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?

    Directory of Open Access Journals (Sweden)

    Martine Coene

    2015-01-01

    This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal-hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields the most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient’s hearing impairment, to predict a patient’s gain in hearing performance with hearing devices, and to optimize the device settings in view of maximum output, the observed findings are highly relevant for audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination.
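    The consonant-confusion and feature-transmission analysis above can be sketched in a few lines: tally (stimulus, response) consonant pairs into a confusion matrix, then score how often a phonetic feature such as voicing survives the confusion. The response pairs and the voicing feature table below are invented for illustration, not the Dutch test battery's data.

    ```python
    # Hypothetical sketch: tallying single-consonant confusions from a
    # word-repetition task and estimating transmission of the "voicing"
    # feature. All data below are invented examples.
    from collections import Counter

    # (presented consonant, reported consonant) pairs -- invented data
    responses = [("p", "p"), ("p", "b"), ("b", "b"), ("b", "p"),
                 ("t", "t"), ("t", "d"), ("d", "d"), ("s", "s")]

    # Confusion matrix as a Counter keyed by (stimulus, response)
    confusions = Counter(responses)

    # Voicing value for each consonant (assumed feature table)
    voiced = {"p": False, "b": True, "t": False, "d": True, "s": False}

    # Feature transmission: proportion of trials on which the response
    # preserves the stimulus's voicing value, regardless of identity
    preserved = sum(n for (stim, resp), n in confusions.items()
                    if voiced[stim] == voiced[resp])
    total = sum(confusions.values())
    voicing_transmission = preserved / total
    print(f"voicing preserved on {preserved}/{total} trials "
          f"({voicing_transmission:.2f})")
    ```

    The same loop, run over a "manner of articulation" feature table instead of `voiced`, would give the comparison the abstract reports between well-preserved and error-prone features.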

  3. The Spoken Word, the Book and the Image in the Work of Evangelization

    Directory of Open Access Journals (Sweden)

    Jerzy Strzelczyk

    2017-06-01

    Little is known about the ‘material’ equipment of the early missionaries who set out to evangelize pagans and apostates, since the authors of the sources focused mainly on the successes (or failures) of the missions. Information concerning the ‘infrastructure’ of missions is rather occasional and of fragmentary nature. The major part in the process of evangelization must have been played by the spoken word, preached directly or through an interpreter, at least in the areas and milieus remote from the centers of ancient civilization. It could not have been otherwise when coming into contact with communities which did not know the art of reading, still less writing. A little more attention is devoted to the other two media, that is, the written word and the images. The significance of the written word was manifold, and – at least as far as the basic liturgical books are concerned (the missal, the evangeliary?) – the manuscripts were indispensable elements of missionaries’ equipment. In certain circumstances the books which the missionaries had at their disposal could acquire special – even magical – significance, the most comprehensible to the Christianized people (the examples given: the evangeliary of St. Winfried-Boniface in the face of death at the hands of a pagan Frisian, the episode with a manuscript in the story of Anskar’s mission written by Rimbert). The role of plastic art representations (images) during the missions is much less frequently mentioned in the sources. After quoting a few relevant examples (Bede the Venerable, Ermoldus Nigellus, Paul the Deacon, Thietmar of Merseburg), the author also cites an interesting, although not entirely successful, attempt to use drama to instruct the Livonians in the faith while converting them to Christianity, which was reported by Henry of Latvia.

  4. A connectionist model for the simulation of human spoken-word recognition

    NARCIS (Netherlands)

    Kuijk, D.J. van; Wittenburg, P.; Dijkstra, A.F.J.; Den Brinker, B.P.L.M.; Beek, P.J.; Brand, A.N.; Maarse, F.J.; Mulder, L.J.M.

    1999-01-01

    A new psycholinguistically motivated, neural-network-based model of human word recognition is presented. In contrast to earlier models, it uses real speech as input. At the word layer, acoustic and temporal information is stored by sequences of connected sensory neurons that pass on sensor

  5. A dual contribution to the involuntary semantic processing of unexpected spoken words.

    Science.gov (United States)

    Parmentier, Fabrice B R; Turner, Jacqueline; Perez, Laura

    2014-02-01

    Sounds are a major cause of distraction. Unexpected to-be-ignored auditory stimuli presented in the context of an otherwise repetitive acoustic background ineluctably break through selective attention and distract people from an unrelated visual task (deviance distraction). This involuntary capture of attention by deviant sounds has been hypothesized to trigger their semantic appraisal and, in some circumstances, interfere with ongoing performance, but it remains unclear how such processing compares with the automatic processing of distractors in classic interference tasks (e.g., Stroop, flanker, Simon tasks). Using a cross-modal oddball task, we assessed the involuntary semantic processing of deviant sounds in the presence and absence of deviance distraction. The results revealed that some involuntary semantic analysis of spoken distractors occurs in the absence of deviance distraction but that this processing is significantly greater in its presence. We conclude that the automatic processing of spoken distractors reflects 2 contributions, one that is contingent upon deviance distraction and one that is independent from it.

  6. Visual information constrains early and late stages of spoken-word recognition in sentence context.

    Science.gov (United States)

    Brunellière, Angèle; Sánchez-García, Carolina; Ikumi, Nara; Soto-Faraco, Salvador

    2013-07-01

    Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger long-lasting N400, compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Engaging Minority Youth in Diabetes Prevention Efforts Through a Participatory, Spoken-Word Social Marketing Campaign.

    Science.gov (United States)

    Rogers, Elizabeth A; Fine, Sarah C; Handley, Margaret A; Davis, Hodari B; Kass, James; Schillinger, Dean

    2017-07-01

    To examine the reach, efficacy, and adoption of The Bigger Picture, a type 2 diabetes (T2DM) social marketing campaign that uses spoken-word public service announcements (PSAs) to teach youth about socioenvironmental conditions influencing T2DM risk. A nonexperimental pilot dissemination evaluation through high school assemblies and a Web-based platform was used. The study took place in San Francisco Bay Area high schools during 2013. In the study, 885 students were sampled from 13 high schools. A 1-hour assembly provided data, poet performances, video PSAs, and Web-based platform information. A Web-based platform featured the campaign Web site and social media. Student surveys preassembly and postassembly (knowledge, attitudes), assembly observations, school demographics, counts of Web-based utilization, and adoption were measured. Descriptive statistics, McNemar's χ² test, and mixed modeling accounting for clustering were used to analyze data. The campaign included 23 youth poet-created PSAs. It reached >2400 students (93% self-identified non-white) through school assemblies and has garnered >1,000,000 views of Web-based video PSAs. School participants demonstrated increased short-term knowledge of T2DM as preventable, with risk driven by socioenvironmental factors (34% preassembly identified environmental causes as influencing T2DM risk compared to 83% postassembly), and perceived greater personal salience of T2DM risk reduction (p < .001 for all). The campaign has been adopted by regional public health departments. The Bigger Picture campaign showed its potential for reaching and engaging diverse youth. Campaign messaging is being adopted by stakeholders.
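    The McNemar's χ² test named in the analysis above applies to exactly this kind of paired pre/post yes-no survey: only the discordant pairs (students who changed their answer) enter the statistic. The counts below are invented for illustration, not the study's data; the p-value uses the 1-df chi-square identity p = erfc(√(x/2)).

    ```python
    # Illustrative sketch (invented numbers): McNemar's chi-square test
    # for a pre/post change in the share of students identifying
    # environmental causes of T2DM on paired pre-/post-assembly surveys.
    import math

    # Discordant pairs from paired yes/no responses (invented):
    # b = "no" pre -> "yes" post; c = "yes" pre -> "no" post
    b, c = 130, 12

    # McNemar's chi-square statistic (without continuity correction)
    chi2 = (b - c) ** 2 / (b + c)

    # Survival function of chi-square with 1 df: p = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))

    print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")
    ```

    With clustering by school, as in the abstract, one would move from this plain test to a mixed model with a school-level random effect; the sketch covers only the unadjusted paired comparison.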

  8. Distinct patterns of brain activity characterise lexical activation and competition in spoken word production.

    Directory of Open Access Journals (Sweden)

    Vitória Piai

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350-650 ms (4-10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.

  9. The role of visual representations within working memory for paired-associate and serial order of spoken words.

    Science.gov (United States)

    Ueno, Taiji; Saito, Satoru

    2013-09-01

    Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.

  10. Grasp it loudly! Supporting actions with semantically congruent spoken action words.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g. "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters such as velocity peaks revealed that when participants produced action-related words their movements became faster compared to conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant to the point that language can refine overt motor behaviour.

  11. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    NARCIS (Netherlands)

    Jesse, A.; McQueen, J.M.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes

  12. Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment

  13. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    Science.gov (United States)

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds the effect was observed similarly to 24-month-olds only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Effects of prosody on spoken Thai word perception in pre-attentive brain processing: a pilot study

    Directory of Open Access Journals (Sweden)

    Kittipun Arunphalungsanti

    2016-12-01

    This study aimed to investigate the effect of unfamiliar stressed prosody on spoken Thai word perception in the pre-attentive processing of the brain, evaluated by the N2a and brain wave oscillatory activity. EEG recordings were obtained from eleven participants, who were instructed to ignore the sound stimuli while watching silent movies. Results showed that perception of words with unfamiliar stressed prosody elicited the N2a component, and the quantitative EEG analysis found that theta and delta wave powers were principally generated in the frontal area. It was possible that the unfamiliar prosody, with different frequencies, duration and intensity of the sound of Thai words, induced highly selective attention and retrieval of information from the episodic memory at the pre-attentive stage of speech perception. This brain electrical activity evidence could be used for further study in the development of valuable clinical tests to evaluate frontal lobe function in speech perception.

  15. Two-year-olds' sensitivity to subphonemic mismatch during online spoken word recognition.

    Science.gov (United States)

    Paquette-Smith, Melissa; Fecher, Natalie; Johnson, Elizabeth K

    2016-11-01

    Sensitivity to noncontrastive subphonemic detail plays an important role in adult speech processing, but little is known about children's use of this information during online word recognition. In two eye-tracking experiments, we investigate 2-year-olds' sensitivity to a specific type of subphonemic detail: coarticulatory mismatch. In Experiment 1, toddlers viewed images of familiar objects (e.g., a boat and a book) while hearing labels containing appropriate or inappropriate coarticulation. Inappropriate coarticulation was created by cross-splicing the coda of the target word onto the onset of another word that shared the same onset and nucleus (e.g., to create boat, the final consonant of boat was cross-spliced onto the initial CV of bone). We tested 24-month-olds and 29-month-olds in this paradigm. Both age groups behaved similarly, readily detecting the inappropriate coarticulation (i.e., showing better recognition of identity-spliced than cross-spliced items). In Experiment 2, we asked how children's sensitivity to subphonemic mismatch compared to their sensitivity to phonemic mismatch. Twenty-nine-month-olds were presented with targets that contained either a phonemic (e.g., the final consonant of boat was spliced onto the initial CV of bait) or a subphonemic mismatch (e.g., the final consonant of boat was spliced onto the initial CV of bone). Here, the subphonemic (coarticulatory) mismatch was not nearly as disruptive to children's word recognition as a phonemic mismatch. Taken together, our findings support the view that 2-year-olds, like adults, use subphonemic information to optimize online word recognition.

  16. From the Spoken Word to Video: Orality, Literacy, Mediated Orality, and the Amazigh (Berber) Cultural Production

    Directory of Open Access Journals (Sweden)

    Daniela Merolla

    2005-08-01

    Full Text Available This article presents new directions in Tamazight/Berber artistic productions. The development of theatre, films and videos in Tamazight is set in the framework of the historical and literary background in the Maghreb and in the lands of the Amazigh Diaspora. It also includes an interview with the video-maker and director Agouram Salout. Key Words: tamazight, berber, theatre, videos, film, taqbaylit, tarifit, tachelhit, new cultural production, writing, orality

  17. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  18. The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition

    Science.gov (United States)

    Chan, Kit Ying; Vitevitch, Michael S.

    2009-01-01

    Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…

  19. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    Science.gov (United States)

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  20. Vocabulary Learning in a Yorkshire Terrier: Slow Mapping of Spoken Words

    Science.gov (United States)

    Griebel, Ulrike; Oller, D. Kimbrough

    2012-01-01

    Rapid vocabulary learning in children has been attributed to “fast mapping”, with new words often claimed to be learned through a single presentation. As reported in 2004 in Science, a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second, we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, and subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion. PMID:22363421

  1. Vocabulary learning in a Yorkshire terrier: slow mapping of spoken words.

    Directory of Open Access Journals (Sweden)

    Ulrike Griebel

    Full Text Available Rapid vocabulary learning in children has been attributed to "fast mapping", with new words often claimed to be learned through a single presentation. As reported in 2004 in Science, a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second, we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, and subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion.

  2. Stimulus variability and the phonetic relevance hypothesis: effects of variability in speaking style, fundamental frequency, and speaking rate on spoken word identification.

    Science.gov (United States)

    Sommers, Mitchell S; Barcroft, Joe

    2006-04-01

    Three experiments were conducted to examine the effects of trial-to-trial variations in speaking style, fundamental frequency, and speaking rate on identification of spoken words. In addition, the experiments investigated whether any effects of stimulus variability would be modulated by phonetic confusability (i.e., lexical difficulty). In Experiment 1, trial-to-trial variations in speaking style reduced the overall identification performance compared with conditions containing no speaking-style variability. In addition, the effects of variability were greater for phonetically confusable words than for phonetically distinct words. In Experiment 2, variations in fundamental frequency were found to have no significant effects on spoken word identification and did not interact with lexical difficulty. In Experiment 3, two different methods for varying speaking rate were found to have equivalent negative effects on spoken word recognition and similar interactions with lexical difficulty. Overall, the findings are consistent with a phonetic-relevance hypothesis, in which accommodating sources of acoustic-phonetic variability that affect phonetically relevant properties of speech signals can impair spoken word identification. In contrast, variability in parameters of the speech signal that do not affect phonetically relevant properties are not expected to affect overall identification performance. Implications of these findings for the nature and development of lexical representations are discussed.

  3. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words

    NARCIS (Netherlands)

    Takashima, A.; Bakker, I.; Hell, J.G. van; Janzen, G.; McQueen, J.M.

    2017-01-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and

  4. Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children

    Directory of Open Access Journals (Sweden)

    Mélanie Havy

    2017-12-01

    Full Text Available From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., the acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.

  5. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words.

    Science.gov (United States)

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2017-04-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Tracing attention and the activation flow in spoken word planning using eye-movements

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements

  7. Distinct Patterns of Brain Activity Characterise Lexical Activation and Competition in Spoken Word Production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Jensen, O.; Schoffelen, J.M.; Bonnefond, M.

    2014-01-01

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography

  8. Tracing Attention and the Activation Flow of Spoken Word Planning Using Eye Movements

    Science.gov (United States)

    Roelofs, Ardi

    2008-01-01

    The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The…

  9. Children show right-lateralized effects of spoken word-form learning.

    Directory of Open Access Journals (Sweden)

    Anni Nora

    Full Text Available It is commonly thought that phonological learning is different in young children compared to adults, possibly due to the speech processing system not yet having reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6-8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language rather than maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, and in contrast to a salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either a native or foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody.

  10. Lexical Tone Variation and Spoken Word Recognition in Preschool Children: Effects of Perceptual Salience

    Science.gov (United States)

    Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.

    2017-01-01

    Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…

  11. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    Directory of Open Access Journals (Sweden)

    Vitoria ePiai

    2013-12-01

    Full Text Available Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal colour naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus. Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the anterior cingulate cortex, a region that is likely implementing domain

  12. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.

  13. Word-Making in Present-Day English.

    Science.gov (United States)

    Simonini, R.C., Jr.

    Words can be studied by describing their origin inductively or deductively. Either way, a precise definition of etymological classes which are mutually exclusive is needed. Present-day English is classified into (1) native words which can be traced back to the word stock of Old English, (2) loan words new to the English language which had…

  14. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Research note: exceptional absolute pitch perception for spoken words in an able adult with autism.

    Science.gov (United States)

    Heaton, Pamela; Davis, Robert E; Happé, Francesca G E

    2008-01-01

    Autism is a neurodevelopmental disorder, characterised by deficits in socialisation and communication, with repetitive and stereotyped behaviours [American Psychiatric Association (1994). Diagnostic and statistical manual for mental disorders (4th ed.). Washington, DC: APA]. Whilst intellectual and language impairment is observed in a significant proportion of diagnosed individuals [Gillberg, C., & Coleman, M. (2000). The biology of the autistic syndromes (3rd ed.). London: Mac Keith Press; Klinger, L., Dawson, G., & Renner, P. (2002). Autistic disorder. In E. Mash & R. Barkley (Eds.), Child psychopathology (2nd ed., pp. 409-454). New York: Guildford Press], the disorder is also strongly associated with the presence of highly developed, idiosyncratic, or savant skills [Heaton, P., & Wallace, G. (2004) Annotation: The savant syndrome. Journal of Child Psychology and Psychiatry, 45 (5), 899-911]. We tested identification of fundamental pitch frequencies in complex tones, sine tones and words in AC, an intellectually able man with autism and absolute pitch (AP), and a group of healthy controls with self-reported AP. The analysis showed that AC's naming of speech pitch was highly superior in comparison to controls. The results suggest that explicit access to perceptual information in speech is retained to a significantly higher degree in autism.

  16. Attentional Capture of Objects Referred to by Spoken Language

    Science.gov (United States)

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  17. Roy Reider (1914-1979) selections from his written and spoken words

    International Nuclear Information System (INIS)

    Paxton, H.C.

    1980-01-01

    Comments by Roy Reider on chemical criticality control, the fundamentals of safety, policy and responsibility, on written procedures, profiting from accidents, safety training, early history of criticality safety, requirements for the possible, the value of enlightened challenge, public acceptance of a new risk, and on prophets of doom are presented

  18. Pupils' Knowledge and Spoken Literary Response beyond Polite Meaningless Words: Studying Yeats's "Easter, 1916"

    Science.gov (United States)

    Gordon, John

    2016-01-01

    This article presents research exploring the knowledge pupils bring to texts introduced to them for literary study, how they share knowledge through talk, and how it is elicited by the teacher in the course of an English lesson. It sets classroom discussion in a context where new examination requirements diminish the relevance of social, cultural…

  19. Retrieval activates related words more than presentation.

    Science.gov (United States)

    Hausman, Hannah; Rhodes, Matthew G

    2018-03-23

    Retrieving information enhances learning more than restudying. One explanation of this effect is based on the role of mediators (e.g., sand-castle can be mediated by beach). Retrieval is hypothesised to activate mediators more than restudying, but existing tests of this hypothesis have had mixed results [Carpenter, S. K. (2011). Semantic information activated during retrieval contributes to later retention: Support for the mediator effectiveness hypothesis of the testing effect. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1547-1552. doi:10.1037/a0024140; Lehman, M., & Karpicke, J. D. (2016). Elaborative retrieval: Do semantic mediators improve memory? Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(10), 1573-1591. doi:10.1037/xlm0000267]. The present experiments explored an explanation of the conflicting results, testing whether mediator activation during a retrieval attempt depends on the accessibility of the target information. A target was considered less versus more accessible when fewer versus more cues were given during retrieval practice (Experiments 1 and 2), when the target had been studied once versus three times initially (Experiment 3), or when the target could not be recalled versus could be recalled during retrieval practice (Experiments 1-3). A mini meta-analysis of all three experiments revealed a small effect such that retrieval activated mediators more than presentation, but mediator activation was not reliably related to target accessibility. Thus, retrieval may enhance learning by activating mediators, in part, but these results suggest the role of other processes, too.

  20. Crossmodal Activation of Visual Object Regions for Auditorily Presented Concrete Words

    Directory of Open Access Journals (Sweden)

    Jasper J F van den Bosch

    2011-10-01

    Full Text Available Dual-coding theory (Paivio, 1986) postulates that the human mind represents objects not just with an analogous, or semantic, code, but with a perceptual representation as well. Previous studies (e.g., Fiebach & Friederici, 2004) indicated that the modality of this representation is not necessarily the one that triggers the representation. The human visual cortex contains several regions, such as the Lateral Occipital Complex (LOC), that respond specifically to object stimuli. To investigate whether these principally visual representation regions are also recruited for auditory stimuli, we presented subjects with spoken words with specific, concrete meanings (‘car’) as well as words with abstract meanings (‘hope’). Their brain activity was measured with functional magnetic resonance imaging. Whole-brain contrasts showed overlap between regions differentially activated by words for concrete objects compared to words for abstract concepts with visual regions activated by a contrast of object versus non-object visual stimuli. We functionally localized the LOC for individual subjects, and a preliminary analysis showed a trend for a concreteness effect in this region of interest at the group level. Appropriate further analysis might include connectivity and classification measures. These results can shed light on the role of crossmodal representations in cognition.

  1. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  2. Presentation of words to separate hemispheres prevents interword illusory conjunctions.

    Science.gov (United States)

    Liederman, J; Sohn, Y S

    1999-03-01

    We tested the hypothesis that division of inputs between the hemispheres could prevent interword letter migrations in the form of illusory conjunctions. The task was to decide whether a centrally-presented consonant-vowel-consonant (CVC) target word matched one of four CVC words presented to a single hemisphere or divided between the hemispheres in a subsequent test display. During half of the target-absent trials, known as conjunction trials, letters from two separate words (e.g., "tag" and "cop") in the test display could be mistaken for a target word (e.g., "top"). For the other half of the target-absent trials, the test display did not match any target consonants (Experiment 1, N = 16) or it matched one target consonant (Experiment 2, N = 29), the latter constituting true "feature" trials. Bi- as compared to unihemispheric presentation significantly reduced the number of conjunction, but not feature, errors. Illusory conjunctions did not occur when the words were presented to separate hemispheres.

  3. TISK 1.0: An easy-to-use Python implementation of the time-invariant string kernel model of spoken word recognition.

    Science.gov (United States)

    You, Heejo; Magnuson, James S

    2018-04-30

    This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development tools, the relative merits of TISK as compared to other models, and the ways in which researchers might use this implementation to guide their own research and theory development. We describe a TISK model that includes features that facilitate in-line graphing of simulation results, integration with standard Python data formats, and graph and data export. The distribution can be downloaded from https://github.com/maglab-uconn/TISK1.0.
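    The interactive-activation dynamics this record describes (word nodes excited bottom-up by time-invariant diphone nodes while competing laterally for selection) can be illustrated with a toy simulation. The sketch below is an illustrative assumption, not TISK's actual API or parameters: the diphone codes, rate constants, and update rule are all invented for demonstration.

    ```python
    # Toy interactive-activation sketch in the spirit of TISK/TRACE.
    # All names and parameter values are illustrative assumptions, not
    # TISK's actual implementation: word nodes receive excitation from
    # matching (time-invariant) diphone nodes and inhibit one another.

    WORDS = {
        "cat": {"ka", "at"},   # hypothetical open-diphone codes
        "cap": {"ka", "ap"},
        "dog": {"do", "og"},
    }

    def step(act, active_diphones, excite=0.1, inhibit=0.05, decay=0.02):
        """One update cycle: bottom-up excitation, lateral inhibition, decay."""
        total = sum(act.values())
        new = {}
        for word, diphones in WORDS.items():
            bottom_up = excite * len(diphones & active_diphones)
            lateral = inhibit * (total - act[word])  # inhibition from competitors
            a = act[word] + bottom_up - lateral - decay * act[word]
            new[word] = min(1.0, max(0.0, a))        # clamp activation to [0, 1]
        return new

    act = {w: 0.0 for w in WORDS}
    for _ in range(10):
        act = step(act, {"ka", "at"})  # input consistent with "cat"
    # "cat" wins; its onset competitor "cap" stays partially active; "dog" does not.
    ```

    Running the loop shows the qualitative behaviour such models are built to capture: the target word's activation dominates, while a phonologically overlapping competitor is partially activated and an unrelated word is suppressed.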

  4. Word Spelling Assessment Using ICT: The Effect of Presentation Modality

    Science.gov (United States)

    Sarris, Menelaos; Panagiotakopoulos, Chris

    2010-01-01

    To date, the spelling process has been assessed using typical spelling-to-dictation tasks, in which children's performance is evaluated mainly in terms of spelling error scores. In the present work a simple graphical computer interface is reported, aiming to investigate the effects of input modality (e.g. visual and verbal) on word spelling. The software…

  5. Optimal viewing position in vertically and horizontally presented Japanese words.

    Science.gov (United States)

    Kajii, N; Osaka, N

    2000-11-01

    In the present study, the optimal viewing position (OVP) phenomenon in Japanese Hiragana was investigated, with special reference to a comparison between the vertical and the horizontal meridians in the visual field. In the first experiment, word recognition scores were determined while the eyes were fixating predetermined locations in vertically and horizontally displayed words. Similar to what has been reported for Roman scripts, OVP curves, which were asymmetric with respect to the beginning of words, were observed in both conditions. However, this asymmetry was less pronounced for vertically than for horizontally displayed words. In the second experiment, the visibility of individual characters within strings was examined for the vertical and horizontal meridians. As for Roman characters, letter identification scores were better in the right than in the left visual field. However, identification scores did not differ between the upper and the lower sides of fixation along the vertical meridian. The results showed that the model proposed by Nazir, O'Regan, and Jacobs (1991) cannot entirely account for the OVP phenomenon. A model in which visual and lexical factors are combined is proposed instead.

  6. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.

  7. Recall of short word lists presented visually at fast rates: effects of phonological similarity and word length.

    Science.gov (United States)

    Coltheart, V; Langdon, R

    1998-03-01

    Phonological similarity of visually presented list items impairs short-term serial recall. Lists of long words are also recalled less accurately than are lists of short words. These results have been attributed to phonological recoding and rehearsal. If subjects articulate irrelevant words during list presentation, both phonological similarity and word length effects are abolished. Experiments 1 and 2 examined effects of phonological similarity and recall instructions on recall of lists shown at fast rates (from one item per 0.114-0.50 sec), which might not permit phonological encoding and rehearsal. In Experiment 3, recall instructions and word length were manipulated using fast presentation rates. Both phonological similarity and word length effects were observed, and they were not dependent on recall instructions. Experiments 4 and 5 investigated the effects of irrelevant concurrent articulation on lists shown at fast rates. Both phonological similarity and word length effects were removed by concurrent articulation, as they were with slow presentation rates.

  8. Spoken Word and Printed Page: G. W. M. Reynolds and ‘The Charing-Cross Revolution’, 1848

    Directory of Open Access Journals (Sweden)

    Mary L. Shannon

    2014-05-01

    theatre of political demonstrations. Trafalgar Square offered Reynolds the possibility that urban space could present the continuation and implementation of radical demands made in print, and could bring radical print vocally to life.

  9. The Presentation of Word Formation in General Monolingual ...

    African Journals Online (AJOL)

    This paper gives suggestions regarding the theoretical approaches that could lead to a better user-directed lexicographic practice. Keywords: Afrikaans dictionaries, cognitive function, complex form, compound, derivative, dictionary function, electronic dictionaries, text production, text reception, user needs, word formation ...

  10. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    Science.gov (United States)

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  11. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  12. Uses of the word "macula" in written English, 1400-present.

    Science.gov (United States)

    Schwartz, Stephen G; Leffler, Christopher T

    2014-01-01

    We compiled uses of the word "macula" in written English by searching multiple databases, including the Early English Books Online Text Creation Partnership, America's Historical Newspapers, the Gale Cengage Collections, and others. "Macula" has been used: as a non-medical "spot" or "stain", literal or figurative, including in astronomy and in Shakespeare; as a medical skin lesion, occasionally with a following descriptive adjective, such as a color (macula alba); as a corneal lesion, including the earliest identified use in English, circa 1400; and to describe the center of the retina. Francesco Buzzi described a yellow color in the posterior pole ("retina tinta di un color giallo") in 1782, but did not use the word "macula". "Macula lutea" was published by Samuel Thomas von Sömmering by 1799, and subsequently used in 1818 by James Wardrop, which appears to be the first known use in English. The Google n-gram database shows a marked increase in the frequencies of both "macula" and "macula lutea" following the introduction of the ophthalmoscope in 1850. "Macula" has been used in multiple contexts in written English. Modern databases provide powerful tools to explore historical uses of this word, which may be underappreciated by contemporary ophthalmologists. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Locus of Word Frequency Effects in Spelling to Dictation: Still at the Orthographic Level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-01-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…

  14. Redundancy Effect on Retention of Vocabulary Words Using Multimedia Presentation

    Science.gov (United States)

    Samur, Yavuz

    2012-01-01

    This study was designed to examine the effect of the redundancy principle in a multimedia presentation constructed for foreign language vocabulary learning on undergraduate students' retention. The underlying hypothesis of this study is that when the students are exposed to the material in multiple ways through animation, concurrent narration,…

  15. Visual half-field presentations of incongruent color words: effects of gender and handedness.

    Science.gov (United States)

    Franzon, M; Hugdahl, K

    1986-09-01

    Right-handed (dextral) and left-handed (sinistral) males and females (N = 15) were compared for language lateralization in a visual half-field (VHF) incongruent color-words paradigm. The paradigm consists of repeated brief (less than 200 msec) presentations of color-words written in an incongruent color. Presentations are either to the right or to the left of center fixation. The task of the subject is to report the color the word is written in on each trial, ignoring the color-word. Color-bars and congruent color-words were used as control stimuli. Vocal reaction time (VRT) and error frequency were used as dependent measures. The logic behind the paradigm is that incongruent color-words should lead to a greater cognitive conflict when presented in the half-field contralateral to the dominant hemisphere. The results showed significantly longer VRTs in the right half-field for the dextral subjects. Furthermore, significantly more errors were observed in the male dextral group when the incongruent stimuli were presented in the right half-field. There was a similar trend in the data for the sinistral males. No differences between half-fields were observed for the female groups. It is concluded that the present results strengthen previous findings from our laboratory (Hugdahl and Franzon, 1985) that the incongruent color-words paradigm is a useful non-invasive technique for the study of lateralization in the intact brain.

  16. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  17. Getting the Word Out: IDRC's Past, Present, and Future discussed at ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2016-04-18

    Apr 18, 2016 ... Getting the Word Out: IDRC's Past, Present, and Future discussed at inaugural ... IDRC President David M. Malone, and others fielded questions from the crowd ... In years to come, IDRC's research priorities will include food ...

  18. Neural Correlates of Word Recognition: A Systematic Comparison of Natural Reading and Rapid Serial Visual Presentation.

    Science.gov (United States)

    Kornrumpf, Benthe; Niefind, Florian; Sommer, Werner; Dimigen, Olaf

    2016-09-01

    Neural correlates of word recognition are commonly studied with (rapid) serial visual presentation (RSVP), a condition that eliminates three fundamental properties of natural reading: parafoveal preprocessing, saccade execution, and the fast changes in attentional processing load occurring from fixation to fixation. We combined eye-tracking and EEG to systematically investigate the impact of all three factors on brain-electric activity during reading. Participants read lists of words either actively with eye movements (eliciting fixation-related potentials) or maintained fixation while the text moved passively through foveal vision at a matched pace (RSVP-with-flankers paradigm, eliciting ERPs). The preview of the upcoming word was manipulated by changing the number of parafoveally visible letters. Processing load was varied by presenting words of varying lexical frequency. We found that all three factors have strong interactive effects on the brain's responses to words: Once a word was fixated, occipitotemporal N1 amplitude decreased monotonically with the amount of parafoveal information available during the preceding fixation; hence, the N1 component was markedly attenuated under reading conditions with preview. Importantly, this preview effect was substantially larger during active reading (with saccades) than during passive RSVP with flankers, suggesting that the execution of eye movements facilitates word recognition by increasing parafoveal preprocessing. Lastly, we found that the N1 component elicited by a word also reflects the lexical processing load imposed by the previously inspected word. Together, these results demonstrate that, under more natural conditions, words are recognized in a spatiotemporally distributed and interdependent manner across multiple eye fixations, a process that is mediated by active motor behavior.

  19. Rethinking spoken fluency

    OpenAIRE

    McCarthy, Michael

    2009-01-01

    This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency, suggests that spoken fluency is best understood as an interactive achievement, and offers the metaphor of ‘confluence’ to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...

  20. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  1. Hearing taboo words can result in early talker effects in word recognition for female listeners.

    Science.gov (United States)

    Tuft, Samantha E; McLennan, Conor T; Krestar, Maura L

    2018-02-01

    Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants that heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.

  2. The word-length effect and disyllabic words.

    Science.gov (United States)

    Lovatt, P; Avons, S E; Masterson, J

    2000-02-01

    Three experiments compared immediate serial recall of disyllabic words that differed on spoken duration. Two sets of long- and short-duration words were selected, in each case maximizing duration differences but matching for frequency, familiarity, phonological similarity, and number of phonemes, and controlling for semantic associations. Serial recall measures were obtained using auditory and visual presentation and spoken and picture-pointing recall. In Experiments 1a and 1b, using the first set of items, long words were better recalled than short words. In Experiments 2a and 2b, using the second set of items, no difference was found between long and short disyllabic words. Experiment 3 confirmed the large advantage for short-duration words in the word set originally selected by Baddeley, Thomson, and Buchanan (1975). These findings suggest that there is no reliable advantage for short-duration disyllables in span tasks, and that previous accounts of a word-length effect in disyllables are based on accidental differences between list items. The failure to find an effect of word duration casts doubt on theories that propose that the capacity of memory span is determined by the duration of list items or the decay rate of phonological information in short-term memory.

  3. Voice reinstatement modulates neural indices of continuous word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Backer, Kristina C; Alain, Claude

    2014-09-01

    The present study was designed to examine listeners' ability to use voice information incidentally during spoken word recognition. We recorded event-related brain potentials (ERPs) during a continuous recognition paradigm in which participants indicated on each trial whether the spoken word was "new" or "old." Old items were presented at 2, 8 or 16 words following the first presentation. Context congruency was manipulated by having the same word repeated by either the same speaker or a different speaker. The different speaker could share the gender, accent or neither feature with the word presented the first time. Participants' accuracy was greater when the old word was spoken by the same speaker than by a different speaker. In addition, accuracy decreased with increasing lag. The correct identification of old words was accompanied by an enhanced late positivity over parietal sites, with no difference found between voice congruency conditions. In contrast, an earlier voice reinstatement effect was observed over frontal sites, an index of priming that preceded recollection in this task. Our results provide further evidence that acoustic and semantic information are integrated into a unified trace and that acoustic information facilitates spoken word recollection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    Science.gov (United States)

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  5. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data.

    Science.gov (United States)

    Gow, David W; Olson, Bruna B

    2015-07-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MRI-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
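
    The core logic of Granger causality can be sketched in a few lines. This is a minimal bivariate illustration under simple least-squares AR models, not the MRI-constrained MEG/EEG source pipeline used in the study; all names and parameters below are assumptions. Signal x "Granger-causes" y if adding x's past to an autoregressive model of y reduces the prediction error beyond what y's own past achieves.

```python
import numpy as np

# Minimal bivariate Granger-causality sketch (illustrative only).
# Compare a restricted model (y predicted from its own past) against a
# full model (y's past plus x's past); a large log variance ratio means
# x's past carries predictive information about y.

def granger_stat(x, y, order=2):
    """Log ratio of residual sums of squares: restricted vs full AR model."""
    n = len(y)
    Y = y[order:]
    lags_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])

    def rss(preds):
        X = np.column_stack([np.ones(len(Y)), preds])  # add intercept
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    return np.log(rss(lags_y) / rss(np.column_stack([lags_y, lags_x])))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):  # y is driven by x's past; x ignores y
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# The statistic is strongly asymmetric: large for x -> y, near zero for y -> x
print(granger_stat(x, y), granger_stat(y, x))
```

    The same comparison of nested predictive models underlies effective connectivity analyses of neural time series, with source-localized activity in each brain region playing the role of x and y.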

  6. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow for flexible, portable, and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition using a hybrid approach to modeling emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  7. Presentation format effects in working memory: the role of attention.

    Science.gov (United States)

    Foos, Paul W; Goolkasian, Paula

    2005-04-01

    Four experiments are reported in which participants attempted to remember three or six concrete nouns, presented as pictures, spoken words, or printed words, while also verifying the accuracy of sentences. Hypotheses meant to explain the higher recall of pictures and spoken words over printed words were tested. Increasing the difficulty and changing the type of processing task from arithmetic to a visual/spatial reasoning task did not influence recall. An examination of long-term modality effects showed that those effects were not sufficient to explain the superior performance with spoken words and pictures. Only when we manipulated the allocation of attention to the items in the storage task by requiring the participants to articulate the items and by presenting the stimulus items under a degraded condition were we able to reduce or remove the effect of presentation format. The findings suggest that the better recall of pictures and spoken words over printed words results from the fact that under normal presentation conditions, printed words receive less processing attention than pictures and spoken words do.

  8. Iconic Factors and Language Word Order

    Science.gov (United States)

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  9. Processing spoken lectures in resource-scarce environments

    CSIR Research Space (South Africa)

    Van Heerden, CJ

    2011-11-01

    ... and then adapting or training new models using the segmented spoken lectures. The resulting systems perform quite well, successfully aligning more than 90% of a selected set of target words.

  10. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Nneka Umera-Okeke

    ... spoken French of IUFLs. Key words: IUFLs, Epenthensis, Ijebu dialect, Autosegmental phonology .... Ambiguities may result: salmi "strait" vs. salami. (An exception is that in .... tiers of segments. In the picture given us by classical generative.

  11. Beyond Phonotactic Frequency: Presentation Frequency Effects Word Productions in Specific Language Impairment

    Science.gov (United States)

    Plante, Elena; Bahl, Megha; Vance, Rebecca; Gerken, LouAnn

    2011-01-01

    Phonotactic frequency effects on word production are thought to reflect accumulated experience with a language. Here we demonstrate that frequency effects can also be obtained through short-term manipulations of the input to children. We presented children with nonwords in an experiment that systematically manipulated English phonotactic frequency…

  12. Influence of Suboptimally and Optimally Presented Affective Pictures and Words on Consumption-Related Behavior

    Science.gov (United States)

    Winkielman, Piotr; Gogolushko, Yekaterina

    2018-01-01

    Affective stimuli can influence immediate reactions as well as spontaneous behaviors. Much evidence for such influence comes from studies of facial expressions. However, it is unclear whether these effects hold for other affective stimuli, and how the amount of stimulus processing changes the nature of the influence. This paper addresses these issues by comparing the influence on consumption behaviors of emotional pictures and valence-matched words presented at suboptimal and supraliminal durations. In Experiment 1, both suboptimal and supraliminal emotional facial expressions influenced consumption in an affect-congruent, assimilative way. In Experiment 2, pictures of both high- and low-frequency emotional objects congruently influenced consumption. In comparison, words tended to produce incongruent effects. We discuss these findings in light of privileged access theories, which hold that pictures better convey affective meaning than words, and embodiment theories, which hold that pictures better elicit somatosensory and motor responses. PMID:29434556

  13. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    Science.gov (United States)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a former step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.

  14. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. The influence of spelling on phonological encoding in word reading, object naming, and word generation

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2006-01-01

    Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation

  16. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  17. Spoken Grammar for Chinese Learners

    Institute of Scientific and Technical Information of China (English)

    徐晓敏

    2013-01-01

    Currently, the concept of spoken grammar has been mentioned among Chinese teachers. However, teachers in China still have a vague idea of spoken grammar. Therefore this dissertation examines what spoken grammar is and argues that native speakers’ model of spoken grammar needs to be highlighted in classroom teaching.

  18. Medulloblastoma Presenting With Pure Word Deafness: Report of One Case and Review of Literature

    Directory of Open Access Journals (Sweden)

    Yen-Ting Chou

    2011-10-01

    Pure word deafness (PWD) is a rare disorder characterized by impaired verbal comprehension that spares discrimination and recognition of nonverbal sounds, with relatively normal spontaneous speech, writing, and reading comprehension. Etiologies of this syndrome are varied, and reports of brain tumors presenting with PWD in children are rare. We report a case of medulloblastoma presenting with PWD in a 7-year-old girl. She visited our outpatient clinic because of deteriorating English dictation performance. PWD was diagnosed by the otolaryngologist after examinations. A posterior fossa tumor and obstructive hydrocephalus were shown on magnetic resonance imaging of the brain. The diagnosis of medulloblastoma was then made by pathology.

  19. Finding words in a language that allows words without vowels

    NARCIS (Netherlands)

    El Aissati, A.; McQueen, J.M.; Cutler, A.

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the

  20. Pre- and postoperative memory of dichotically presented words in patients with complex partial seizures.

    Science.gov (United States)

    Christianson, S A; Nilsson, L G; Silfvenius, H

    1989-01-01

    Dichotic listening tests were used to determine cerebral hemisphere memory functions in patients with complex partial seizures before, 10 days after, and 1-3 yr after right (RTE) or left (LTE) temporal-lobe excisions. Control subjects were also tested on two occasions. The tests consisted of presenting a series of 12-word lists and 7-word lists alternately to one ear while backward speech was presented to the other ear. Measures of immediate free recall, final free recall, final cued recall, and serial recall were employed. The results revealed: (a) that both groups of patients were inferior to the control group on tests tapping long-term memory functions rather than short-term memory functions, (b) a right-ear advantage for RTE patients at postoperative testing, (c) that the LTE group was more affected by surgery than the RTE group, and (d) a general improvement in recall performance from early to late postoperative testing. Taken together, these results indicate that the present dichotic test can be used as a non-invasive hemisphere memory test to complement invasive techniques in the diagnosis of patients considered for epilepsy surgery.

  1. The Impact of Presenting Semantically Related Clusters of New Words on Iranian Intermediate EFL learners' Vocabulary Acquisition

    Directory of Open Access Journals (Sweden)

    Saiede Shiri

    2017-09-01

    Teaching vocabulary in semantically related sets is a common practice among EFL teachers. The present study tested the effectiveness of this technique by comparing it with the alternative technique of presenting semantically unrelated clusters, using Iranian intermediate EFL learners. Three intact classes of participants studying in Isfahan were presented with a set of unrelated words through “504 Absolutely Essential Words”, a set of related words through “The Oxford Picture Dictionary”, and the control group was presented with new words through six texts from “Reading Through Interaction”. Comparison of the results indicated that, while both techniques helped the learners acquire new sets of words, presenting words in semantically unrelated sets seems to be more effective.

  2. The effect of visual and verbal modes of presentation on children's retention of images and words

    Science.gov (United States)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  3. The locus of word frequency effects in skilled spelling-to-dictation.

    Science.gov (United States)

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  4. Theology of Jesus’ words from the cross

    Directory of Open Access Journals (Sweden)

    Bogdan Zbroja

    2012-09-01

    The article presents the theological message of the last words that Jesus spoke from the height of the cross. The content is organized around three kinds of Christ’s relations: the words addressed to God the Father; the words addressed to the good people standing by the cross; and the so-called declarations, which the Master addressed to no one in particular but uttered in general. All these words speak of the Master’s love. They express His full awareness of what was being done and of the decision He voluntarily took. Above all, the Lord’s statements reveal His obedience to the will of God expressed in the inspired words of the Holy Scriptures. Jesus fulfills all the prophecies of the Old Testament through the words He pronounced and the works He accomplished, which would become the content of the New Testament.

  5. Effects of aversive odour presentation on inhibitory control in the Stroop colour-word interference task.

    Science.gov (United States)

    Finkelmeyer, Andreas; Kellermann, Thilo; Bude, Daniela; Niessen, Thomas; Schwenzer, Michael; Mathiak, Klaus; Reske, Martina

    2010-10-18

    Due to the unique neural projections of the olfactory system, odours have the ability to directly influence affective processes. Furthermore, it has been shown that emotional states can influence various non-emotional cognitive tasks, such as memory and planning. However, the link between emotional and cognitive processes is still not fully understood. The present study used the olfactory pathway to induce a negative emotional state in humans to investigate its effect on inhibitory control performance in a standard, single-trial manual Stroop colour-word interference task. An unpleasant (H2S) and an emotionally neutral (Eugenol) odorant were presented in two separate experimental runs, both in blocks alternating with ambient air, to 25 healthy volunteers, while they performed the cognitive task. Presentation of the unpleasant odorant reduced Stroop interference by reducing the reaction times for incongruent stimuli, while the presentation of the neutral odorant had no effect on task performance. The odour-induced negative emotional state appears to facilitate cognitive processing in the task used in the present study, possibly by increasing the amount of cognitive control that is being exerted. This stands in contrast to other findings that showed impaired cognitive performance under odour-induced negative emotional states, but is consistent with models of mood-congruent processing.

  6. Effects of aversive odour presentation on inhibitory control in the Stroop colour-word interference task

    Directory of Open Access Journals (Sweden)

    Nießen Thomas

    2010-10-01

    Background: Due to the unique neural projections of the olfactory system, odours have the ability to directly influence affective processes. Furthermore, it has been shown that emotional states can influence various non-emotional cognitive tasks, such as memory and planning. However, the link between emotional and cognitive processes is still not fully understood. The present study used the olfactory pathway to induce a negative emotional state in humans to investigate its effect on inhibitory control performance in a standard, single-trial manual Stroop colour-word interference task. An unpleasant (H2S) and an emotionally neutral (Eugenol) odorant were presented in two separate experimental runs, both in blocks alternating with ambient air, to 25 healthy volunteers, while they performed the cognitive task. Results: Presentation of the unpleasant odorant reduced Stroop interference by reducing the reaction times for incongruent stimuli, while the presentation of the neutral odorant had no effect on task performance. Conclusions: The odour-induced negative emotional state appears to facilitate cognitive processing in the task used in the present study, possibly by increasing the amount of cognitive control that is being exerted. This stands in contrast to other findings that showed impaired cognitive performance under odour-induced negative emotional states, but is consistent with models of mood-congruent processing.

  7. Stimulus-independent semantic bias misdirects word recognition in older adults.

    Science.gov (United States)

    Rogers, Chad S; Wingfield, Arthur

    2015-07-01

    Older adults' normally adaptive use of semantic context to aid in word recognition can have a negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word-pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed older adults' greater tendency to misidentify words based on their semantic context compared to the young adults, and to do so with a higher level of confidence. This age difference was unaffected by differences in the relative level of acoustic masking.

  8. Effects of Rhyme and Spelling Patterns on Auditory Word ERPs Depend on Selective Attention to Phonology

    Science.gov (United States)

    Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.

    2013-01-01

    ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…

  9. Finding words in a language that allows words without vowels.

    Science.gov (United States)

    El Aissati, Abder; McQueen, James M; Cutler, Anne

    2012-07-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the constraint would be counter-productive in certain languages that allow stand-alone vowelless open-class words. One such language is Berber (where t is indeed a word). Berber listeners here detected words affixed to nonsense contexts with or without vowels. Length effects seen in other languages replicated in Berber, but in contrast to prior findings, word detection was not hindered by vowelless contexts. When words can be vowelless, otherwise universal constraints disfavoring vowelless words do not feature in spoken-word recognition. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. "Now We Have Spoken."

    Science.gov (United States)

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  11. Lexical analysis in schizophrenia: how emotion and social word use informs our understanding of clinical presentation.

    Science.gov (United States)

    Minor, Kyle S; Bonfils, Kelsey A; Luther, Lauren; Firmin, Ruth L; Kukla, Marina; MacLain, Victoria R; Buck, Benjamin; Lysaker, Paul H; Salyers, Michelle P

    2015-05-01

    The words people use convey important information about internal states, feelings, and views of the world around them. Lexical analysis is a fast, reliable method of assessing word use that has shown promise for linking speech content, particularly in emotion and social categories, with psychopathological symptoms. However, few studies have utilized lexical analysis instruments to assess speech in schizophrenia. In this exploratory study, we investigated whether positive emotion, negative emotion, and social word use was associated with schizophrenia symptoms, metacognition, and general functioning in a schizophrenia cohort. Forty-six participants generated speech during a semi-structured interview, and word use categories were assessed using a validated lexical analysis measure. Trained research staff completed symptom, metacognition, and functioning ratings using semi-structured interviews. Word use categories significantly predicted all variables of interest, accounting for 28% of the variance in symptoms and 16% of the variance in metacognition and general functioning. Anger words, a subcategory of negative emotion, significantly predicted greater symptoms and lower functioning. Social words significantly predicted greater metacognition. These findings indicate that lexical analysis instruments have the potential to play a vital role in psychosocial assessments of schizophrenia. Future research should replicate these findings and examine the relationship between word use and additional clinical variables across the schizophrenia-spectrum. Copyright © 2015 Elsevier Ltd. All rights reserved.
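    The word-category rates used in such lexical analyses reduce to dictionary lookups over a transcript. The sketch below illustrates the idea; the tiny category lists and function name are illustrative stand-ins, not the validated lexicon the study used:

    ```python
    # Illustrative sketch of LIWC-style lexical analysis: the percentage of
    # words in a speech sample that fall into emotion and social categories.
    # These mini-dictionaries are hypothetical, for demonstration only.
    CATEGORIES = {
        "positive_emotion": {"happy", "good", "love", "nice"},
        "negative_emotion": {"sad", "angry", "hate", "afraid"},
        "social": {"friend", "family", "talk", "people"},
    }

    def category_rates(text):
        """Return the percentage of words in `text` matching each category."""
        words = text.lower().split()
        total = len(words)
        return {cat: 100.0 * sum(w in vocab for w in words) / total
                for cat, vocab in CATEGORIES.items()}

    rates = category_rates("I talk to my friend but people make me angry")
    print(rates["social"])            # 30.0 (talk, friend, people out of 10 words)
    print(rates["negative_emotion"])  # 10.0 (angry)
    ```

    A real instrument would additionally handle stemming, multi-word entries, and a much larger validated dictionary; these per-category percentages would then serve as predictors in the regression analyses the abstract describes.
    
    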

  12. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    Science.gov (United States)

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  13. The impact of music on learning and consolidation of novel words.

    Science.gov (United States)

    Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J

    2017-01-01

    Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.

  14. Effects of providing word sounds during printed word learning

    NARCIS (Netherlands)

    Reitsma, P.; Dongen, van A.J.N.; Custers, E.

    1984-01-01

    The purpose of this study was to explore the effects of the availability of the spoken sound of words along with the printed forms during reading practice. Firstgrade children from two normal elementary schools practised reading several unfamiliar words in print. For half of the printed words the

  15. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  16. What does že jo (and že ne) mean in spoken dialogue

    Czech Academy of Sciences Publication Activity Database

    Komrsková, Zuzana

    2017-01-01

    Roč. 68, č. 2 (2017), s. 229-237 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords: spoken language * spoken corpus * tag question * response word Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  17. SPOKEN AYACUCHO QUECHUA, UNITS 11-20.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THE ESSENTIALS OF AYACUCHO GRAMMAR WERE PRESENTED IN THE FIRST VOLUME OF THIS SERIES, SPOKEN AYACUCHO QUECHUA, UNITS 1-10. THE 10 UNITS IN THIS VOLUME (11-20) ARE INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE, AND PRESENT THE STUDENT WITH LENGTHIER AND MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS AS WELL…

  18. A Grammar of Spoken Brazilian Portuguese.

    Science.gov (United States)

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  19. The Effect of Number and Presentation Order of High-Constraint Sentences on Second Language Word Learning.

    Science.gov (United States)

    Ma, Tengfei; Chen, Ran; Dunlap, Susan; Chen, Baoguo

    2016-01-01

    This paper presents the results of an experiment that investigated the effects of the number and presentation order of high-constraint sentences on the semantic processing of unknown second language (L2) words (pseudowords) through reading. All participants were Chinese native speakers who learned English as a foreign language. In the experiment, sentence constraint and the order of sentences of different constraint were manipulated in English sentences, as was the L2 proficiency level of participants. We found that the number of high-constraint sentences supported L2 word learning, except in the condition in which the high-constraint exposure was presented first. Moreover, when the number of high-constraint sentences was the same, learning was significantly better when the first exposure was a high-constraint exposure. No proficiency-level effects were found. Our results provide direct evidence that L2 word learning benefits from high-quality language input and from first presentations of high-quality language input.

  20. SPOKEN CUZCO QUECHUA, UNITS 7-12.

    Science.gov (United States)

    SOLA, DONALD F.; AND OTHERS

    THIS SECOND VOLUME OF AN INTRODUCTORY COURSE IN SPOKEN CUZCO QUECHUA ALSO COMPRISES ENOUGH MATERIAL FOR ONE INTENSIVE SUMMER SESSION COURSE OR ONE SEMESTER OF SEMI-INTENSIVE INSTRUCTION (120 CLASS HOURS). THE METHOD OF PRESENTATION IS ESSENTIALLY THE SAME AS IN THE FIRST VOLUME WITH FURTHER CONTRASTIVE, LINGUISTIC ANALYSIS OF ENGLISH-QUECHUA…

  1. Towards Affordable Disclosure of Spoken Heritage Archives

    NARCIS (Netherlands)

    Larson, M; Ordelman, Roeland J.F.; Heeren, W.F.L.; Fernie, K; de Jong, Franciska M.G.; Huijbregts, M.A.H.; Oomen, J; Hiemstra, Djoerd

    2009-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to

  2. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
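    The additive-factors logic the authors apply can be illustrated with a toy computation: if two factors act on different processing stages, their effects on mean latency should add, so the 2x2 interaction contrast should be near zero. The function name and latencies below are made up for illustration:

    ```python
    # Sketch of the additive-factors logic (Sternberg, 1969) on hypothetical
    # spelling-to-dictation latencies: word frequency (hi/lo) crossed with
    # phonological neighborhood density (dense/sparse).
    def interaction_contrast(cell_means):
        """Interaction contrast for a 2x2 design given as
        {(frequency, density): mean latency in ms}. Zero means additivity."""
        return ((cell_means[("hi", "dense")] - cell_means[("hi", "sparse")])
                - (cell_means[("lo", "dense")] - cell_means[("lo", "sparse")]))

    # Made-up cell means: frequency speeds responses by 40 ms, density slows
    # them by 25 ms, with no interaction (a purely additive pattern).
    additive = {("hi", "dense"): 625, ("hi", "sparse"): 600,
                ("lo", "dense"): 665, ("lo", "sparse"): 640}
    print(interaction_contrast(additive))  # 0 -> consistent with separate loci
    ```

    An overadditive pattern (a reliably non-zero contrast) would instead point to the two factors sharing a locus, which is the comparison the abstract's additive-factors argument turns on.
    
    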

  3. Modality differences between written and spoken story retelling in healthy older adults

    Directory of Open Access Journals (Sweden)

    Jessica Ann Obermeyer

    2015-04-01

    Methods: Ten native English-speaking healthy elderly participants between the ages of 50 and 80 were recruited. Exclusionary criteria included neurological disease/injury, history of learning disability, uncorrected hearing or vision impairment, history of drug/alcohol abuse, and presence of cognitive decline (based on the Cognitive Linguistic Quick Test). Spoken and written discourse was analyzed for microlinguistic measures including total words, percent correct information units (CIUs; Nicholas & Brookshire, 1993), and percent complete utterances (CUs; Edmonds et al., 2009). CIUs measure relevant and informative words, while CUs focus at the sentence level and measure whether a relevant subject, verb, and object (if appropriate) are present. Results: Analysis was completed using the Wilcoxon Rank Sum Test due to the small sample size. Preliminary results revealed that healthy elderly people produced significantly more words in spoken retellings than in written retellings (p = .000); however, this measure contrasted with %CIUs and %CUs, with participants producing significantly higher %CIUs (p = .000) and %CUs (p = .000) in written story retellings than in spoken story retellings. Conclusion: These findings indicate that written retellings, while shorter, were more accurate at both the word (CIU) and sentence (CU) level. This observation could be related to the ability to revise written text and therefore make it more concise, whereas the nature of speech results in more embellishment and “thinking out loud,” such as comments about the task, associated observations about the story, etc. We plan to run more participants and conduct a main concepts analysis (before conference time) to gain more insight into modality differences and implications.
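    The %CIU measure described above is a simple proportion over a hand-coded transcript and can be sketched as follows; the function name, coding flags, and example transcript are hypothetical:

    ```python
    # Sketch of the %CIU measure (Nicholas & Brookshire, 1993): the percentage
    # of words in a discourse sample judged relevant and informative. The
    # informativeness judgments here are hand-coded flags, purely illustrative.
    def percent_cius(words_with_flags):
        """words_with_flags: list of (word, is_ciu) pairs from a coded transcript."""
        total = len(words_with_flags)
        cius = sum(1 for _, is_ciu in words_with_flags if is_ciu)
        return 100.0 * cius / total if total else 0.0

    # A toy coded retelling: fillers and asides ("um", "you know") are not CIUs.
    coded = [("the", True), ("um", False), ("dog", True), ("chased", True),
             ("you", False), ("know", False), ("the", True), ("cat", True)]
    print(percent_cius(coded))  # 62.5 (5 informative words out of 8)
    ```

    The %CU measure works analogously at the utterance level, scoring each utterance for the presence of a relevant subject, verb, and (where appropriate) object.
    
    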

  4. THE RECOGNITION OF SPOKEN MONO-MORPHEMIC COMPOUNDS IN CHINESE

    Directory of Open Access Journals (Sweden)

    Yu-da Lai

    2012-12-01

    This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters, whether or not they are morphemic. A mono-morphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine whether this purely orthographic difference affects auditory lexical access, by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition task and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by the overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.

  5. Uses of the Word “Macula” in Written English, 1400-Present

    Science.gov (United States)

    Schwartz, Stephen G.; Leffler, Christopher T.

    2014-01-01

    We compiled uses of the word “macula” in written English by searching multiple databases, including the Early English Books Online Text Creation Partnership, America’s Historical Newspapers, the Gale Cengage Collections, and others. “Macula” has been used: as a non-medical “spot” or “stain”, literal or figurative, including in astronomy and in Shakespeare; as a medical skin lesion, occasionally with a following descriptive adjective, such as a color (macula alba); as a corneal lesion, including the earliest identified use in English, circa 1400; and to describe the center of the retina. Francesco Buzzi described a yellow color in the posterior pole (“retina tinta di un color giallo”) in 1782, but did not use the word “macula”. “Macula lutea” was published by Samuel Thomas von Sömmering by 1799, and subsequently used in 1818 by James Wardrop, which appears to be the first known use in English. The Google n-gram database shows a marked increase in the frequencies of both “macula” and “macula lutea” following the introduction of the ophthalmoscope in 1850. “Macula” has been used in multiple contexts in written English. Modern databases provide powerful tools to explore historical uses of this word, which may be underappreciated by contemporary ophthalmologists. PMID:24913329

  6. An effective method of collecting practical knowledge by presentation of videos and related words

    Directory of Open Access Journals (Sweden)

    Satoshi Shimada

    2017-12-01

    Full Text Available The concentration of practical knowledge and experiential knowledge in the form of collective intelligence (the wisdom of the crowd) is of interest in the area of skill transfer. Previous studies have confirmed that collective intelligence can be formed through the utilization of video annotation systems, where knowledge that is recalled while watching videos of work tasks can be assigned in the form of a comment. The knowledge that can be collected is limited, however, to the content that can be depicted in videos, meaning that it is necessary to prepare many videos when collecting knowledge. This paper proposes a method for expanding the scope of recall from the same video through the automatic generation and simultaneous display of related words and video scenes. Further, the validity of the proposed method is empirically illustrated through the example of a field experiment related to mountaineering skills.

  7. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones : Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state of the art in spoken dialog systems, with applications in robotics, knowledge access and communication. They address specifically: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and, 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.

  8. Is spoken Danish less intelligible than Swedish?

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.

    2010-01-01

    The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is

  9. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  10. Word level language identification in online multilingual communication

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Dogruoz, A. Seza

    2013-01-01

    Multilingual speakers switch between languages in online and spoken communication. Analyses of large scale multilingual data require automatic language identification at the word level. For our experiments with multilingual online discussions, we first tag the language of individual words using
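    Word-level tagging of the kind the abstract describes can be illustrated with a toy dictionary-lookup baseline. The word lists and the `tag_word`/`tag_sentence` helpers below are invented for illustration; the authors' actual system is more sophisticated than a plain lexicon lookup.

```python
# Minimal illustration of word-level language identification by
# dictionary lookup. The word lists are tiny and invented; a real
# system would use large lexicons plus contextual models.
DUTCH = {"ik", "heb", "een", "mooi", "huis", "vandaag"}
ENGLISH = {"i", "have", "a", "nice", "house", "today"}

def tag_word(word):
    w = word.lower()
    in_nl, in_en = w in DUTCH, w in ENGLISH
    if in_nl and not in_en:
        return "nl"
    if in_en and not in_nl:
        return "en"
    return "unk"  # ambiguous or out-of-vocabulary

def tag_sentence(sentence):
    # Tag each whitespace-separated token independently.
    return [(w, tag_word(w)) for w in sentence.split()]

print(tag_sentence("ik have een nice huis"))
```

    A mixed Dutch-English sentence like the one above comes back with a per-word language tag, which is exactly the granularity code-switching analyses need.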

  11. Word Memory Test Performance Across Cognitive Domains, Psychiatric Presentations, and Mild Traumatic Brain Injury.

    Science.gov (United States)

    Rowland, Jared A; Miskey, Holly M; Brearly, Timothy W; Martindale, Sarah L; Shura, Robert D

    2017-05-01

    The current study addressed two aims: (i) determine how Word Memory Test (WMT) performance relates to test performance across numerous cognitive domains and (ii) evaluate how current psychiatric disorders or mild traumatic brain injury (mTBI) history affects performance on the WMT after excluding participants with poor symptom validity. Participants were 235 Iraq- and Afghanistan-era veterans (mean age = 35.5 years) who completed a comprehensive neuropsychological battery. Participants were divided into two groups based on WMT performance (Pass = 193, Fail = 42). Tests were grouped into cognitive domains and an average z-score was calculated for each domain. Significant differences were found between those who passed and those who failed the WMT on the memory, attention, executive function, and motor output domain z-scores. WMT failure was associated with a larger performance decrement in the memory domain than the sensation or visuospatial-construction domains. Participants with a current psychiatric diagnosis or mTBI history were significantly more likely to fail the WMT, even after removing participants with poor symptom validity. Results suggest that the WMT is most appropriate for assessing validity in the domains of attention, executive function, motor output and memory, with little relationship to performance in domains of sensation or visuospatial-construction. Comprehensive cognitive batteries would benefit from inclusion of additional performance validity tests in these domains. Additionally, symptom validity did not explain higher rates of WMT failure in individuals with a current psychiatric diagnosis or mTBI history. Further research is needed to better understand how these conditions may affect WMT performance. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  12. Word Order Acquisition in Persian Speaking Children

    Directory of Open Access Journals (Sweden)

    Nahid Jalilevand

    2017-06-01

    Discussion: Although spoken Persian has no strict word order, Persian-speaking children tend to use the other logically possible orders of subject (S), verb (V), and object (O) less often than the SOV structure.

  13. Does segmental overlap help or hurt? Evidence from blocked cyclic naming in spoken and written production.

    Science.gov (United States)

    Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda

    2016-04-01

    Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.

  14. Vocal reaction times to unilaterally presented concrete and abstract words: towards a theory of differential right hemispheric semantic processing.

    Science.gov (United States)

    Rastatter, M; Dell, C W; McGuire, R A; Loren, C

    1987-03-01

    Previous studies investigating hemispheric organization for processing concrete and abstract nouns have provided conflicting results. Using manual reaction time tasks some studies have shown that the right hemisphere is capable of analyzing concrete words but not abstract. Others, however, have inferred that the left hemisphere is the sole analyzer of both types of lexicon. The present study tested these issues further by measuring vocal reaction times of normal subjects to unilaterally presented concrete and abstract items. Results were consistent with a model of functional localization which suggests that the minor hemisphere is capable of differentially processing both types of lexicon in the presence of a dominant left hemisphere.

  15. Auditory word recognition is not more sensitive to word-initial than to word-final stimulus information

    NARCIS (Netherlands)

    Vlugt, van der M.J.; Nooteboom, S.G.

    1986-01-01

    Several accounts of human recognition of spoken words assign special importance to stimulus-word onsets. The experiment described here was designed to find out whether such a word-beginning superiority effect, which is supported by experimental evidence of various kinds, is due to a special

  16. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.

  17. The Influence of Topic Status on Written and Spoken Sentence Production

    Science.gov (United States)

    Cowles, H. Wind; Ferreira, Victor S.

    2012-01-01

    Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production. PMID:22408281

  18. Effects of aversive odour presentation on inhibitory control in the Stroop colour-word interference task

    OpenAIRE

    Finkelmeyer, Andreas; Kellermann, Thilo; Bude, Daniela; Nießen, Thomas; Schwenzer, Michael; Mathiak, Klaus; Reske, Martina

    2010-01-01

    Abstract Background Due to the unique neural projections of the olfactory system, odours have the ability to directly influence affective processes. Furthermore, it has been shown that emotional states can influence various non-emotional cognitive tasks, such as memory and planning. However, the link between emotional and cognitive processes is still not fully understood. The present study used the olfactory pathway to induce a negative emotional state in humans to investigate its effect on i...

  19. Inferring Speaker Affect in Spoken Natural Language Communication

    OpenAIRE

    Pon-Barry, Heather Roberta

    2012-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...

  20. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    Full Text Available This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They have studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data was collected at the time the students sat for the mid-term oral test and was further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is interference, the influence of their own language system, especially in word order.

  1. In their own words: how hospitals present corporate restructuring in their annual reports.

    Science.gov (United States)

    Arndt, M; Bigelow, B

    1999-01-01

    Hospitals operate in an environment with strong institutional pressures, in which legitimacy is critical to an organization's access to resources. In such an environment, organizations can increase their legitimacy by engaging in activities or discussing them in a manner that signals that the organization adheres to values held by its constituents. One important symbol of organizational actions or intentions is the formal organizational structure. When hospitals began to adopt a corporate structure in the early eighties, the way in which they presented this decision to the public was as important as the technical merits of the decision itself. This study investigates, through an analysis of annual reports, what hospitals signaled about their adoption of a corporate structure. The findings suggest that through restructuring, hospitals signaled that they were in line with practices advocated in the industry and literature (e.g., adhering to business values, protection of assets, or increasing patient services). By presenting multiple reasons for restructuring, hospitals could signal their attention to the needs of various constituents, and by touching only briefly on each reason, they could ignore the potential conflict between demands such as lower hospital cost and increased services. The findings also suggest that the first hospitals to adopt a corporate structure sought to educate constituents about restructuring by devoting a greater share of their annual report to the topic than later adopters and by enumerating a larger number of anticipated benefits from the structure, which would have enhanced the innovation's legitimacy in the early years.

  2. The employment of a spoken language computer applied to an air traffic control task.

    Science.gov (United States)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  3. From primed concepts to action: A meta-analysis of the behavioral effects of incidentally presented words.

    Science.gov (United States)

    Weingarten, Evan; Chen, Qijia; McAdams, Maxwell; Yi, Jessica; Hepler, Justin; Albarracín, Dolores

    2016-05-01

    A meta-analysis assessed the behavioral impact of and psychological processes associated with presenting words connected to an action or a goal representation. The average and distribution of 352 effect sizes (analyzed using fixed-effects and random-effects models) was obtained from 133 studies (84 reports) in which word primes were incidentally presented to participants, with a nonopposite control group, before measuring a behavioral dependent variable. Findings revealed a small behavioral priming effect (dFE = 0.332, dRE = 0.352), which was robust across methodological procedures and only minimally biased by the publication of positive (vs. negative) results. Theory testing analyses indicated that more valued behavior or goal concepts (e.g., associated with important outcomes or values) were associated with stronger priming effects than were less valued behaviors. Furthermore, there was some evidence of persistence of goal effects over time. These results support the notion that goal activation contributes over and above perception-behavior in explaining priming effects. In summary, theorizing about the role of value and satisfaction in goal activation pointed to stronger effects of a behavior or goal concept on overt action. There was no evidence that expectancy (ease of achieving the goal) moderated priming effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
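    The fixed-effects pooling behind an average effect size like the dFE reported above can be sketched as an inverse-variance weighted mean. The effect sizes and variances below are invented for illustration; they are not the 352 effect sizes from the meta-analysis.

```python
# Fixed-effects meta-analytic mean: an inverse-variance weighted
# average of study-level effect sizes. All values here are invented
# illustration data, not the meta-analysis's data.
def fixed_effect_mean(effects, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

d = [0.40, 0.25, 0.35, 0.30]   # per-study standardized mean differences
v = [0.02, 0.05, 0.03, 0.04]   # per-study sampling variances

print(round(fixed_effect_mean(d, v), 3))
```

    More precise studies (smaller sampling variance) get more weight; a random-effects model would additionally add an estimated between-study variance component to each study's variance before weighting, which is why dFE and dRE can differ.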

  4. Words, Words, Words: English, Vocabulary.

    Science.gov (United States)

    Lamb, Barbara

    The Quinmester course on words gives the student the opportunity to increase his proficiency by investigating word origins, word histories, morphology, and phonology. The course includes the following: dictionary skills and familiarity with the "Oxford," "Webster's Third," and "American Heritage" dictionaries; word…

  5. Brain regions activated by the passive processing of visually- and auditorily-presented words measured by averaged PET images of blood flow change

    International Nuclear Information System (INIS)

    Peterson, S.E.; Fox, P.T.; Posner, M.I.; Raichle, M.E.

    1987-01-01

    A limited number of regions specific to input modality are activated by the auditory and visual presentation of single words. These regions include primary auditory and visual cortex, and a modality-specific higher-order region that may be performing computations at a word level of analysis.

  6. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  7. Adapting the Freiburg monosyllabic word test for Slovenian

    Directory of Open Access Journals (Sweden)

    Tatjana Marvin

    2017-12-01

    Full Text Available Speech audiometry is one of the standard methods used to diagnose the type of hearing loss and to assess the communication function of the patient by determining the level of the patient’s ability to understand and repeat words presented to him or her in a hearing test. For this purpose, the Slovenian adaptations of the German tests developed by Hahlbrock (1953, 1960) – the Freiburg Monosyllabic Word Test and the Freiburg Number Test – are used in Slovenia (adapted in 1968 by Pompe). In this paper we focus on the Freiburg Monosyllabic Word Test for Slovenian, which has been criticized by patients as well as in the literature for the unequal difficulty and frequency of the words, many of these being extremely rare or even obsolete. As part of the patient’s communication function is retrieving the meaning of individual words by guessing, the less frequent and consequently less familiar words do not contribute to reliable testing results. We therefore adapt the test by identifying and removing such words and supplementing them with phonetically similar words to preserve the phonetic balance of the list. The words used for replacement are extracted from the written corpus of Slovenian, Gigafida, and the spoken corpus of Slovenian, GOS, while the optimal combinations of words are established by using computational algorithms.
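    The corpus-frequency filtering step described above can be sketched as follows. The Slovenian-looking words, frequency counts, and threshold are all invented; the real adaptation drew frequencies from the Gigafida and GOS corpora and applied phonetic-balance constraints not modeled here.

```python
# Toy sketch: flag rare test words and pick the most frequent
# replacement among phonetically similar candidates. All counts
# are invented; real frequencies would come from corpus data.
freq = {"miza": 5200, "hisa": 8100, "cvet": 3900, "brv": 12, "brk": 640}

def too_rare(word, threshold=100):
    # A word is too rare to test reliably if its corpus count is low.
    return freq.get(word, 0) < threshold

def best_replacement(word, candidates):
    # Candidates are assumed pre-filtered for phonetic similarity.
    return max(candidates, key=lambda w: freq.get(w, 0))

rare = [w for w in ["miza", "brv", "cvet"] if too_rare(w)]
print(rare, "->", [best_replacement(w, ["brk"]) for w in rare])
```

    In this toy list only "brv" falls below the threshold and is swapped for its more frequent, phonetically similar stand-in "brk".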

  8. "Daddy, Where Did the Words Go?" How Teachers Can Help Emergent Readers Develop a Concept of Word in Text

    Science.gov (United States)

    Flanigan, Kevin

    2006-01-01

    This article focuses on a concept that has rarely been studied in beginning reading research--a child's concept of word in text. Recent examinations of this phenomenon suggest that a child's ability to match spoken words to written words while reading--a concept of word in text--plays a pivotal role in early reading development. In this article,…

  9. The Study of Synonymous Word "Mistake"

    OpenAIRE

    Suwardi, Albertus

    2016-01-01

    This article discusses the synonymous word "mistake". The discussion will also cover the meaning of 'word' itself. Words can be considered as forms, whether spoken or written, or alternatively as composite expressions, which combine form and meaning. Synonyms are different phonological words which have the same or very similar meanings. The synonyms of mistake are error, fault, blunder, slip, slip-up, gaffe and inaccuracy. The data is taken from a computer program. The procedure of data collection is...

  10. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

    This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home

  11. Examining the relationship between free recall and immediate serial recall: Similar patterns of rehearsal and similar effects of word length, presentation rate, and articulatory suppression.

    Science.gov (United States)

    Bhatarah, Parveen; Ward, Geoff; Smith, Jessica; Hayes, Louise

    2009-07-01

    In five experiments, rehearsal and recall phenomena were examined using the free recall and immediate serial recall (ISR) tasks. In Experiment 1, participants were presented with lists of eight words, were precued or postcued to respond using free recall or ISR, and rehearsed out loud during presentation. The patterns of rehearsal were similar in all the conditions, and there was little difference between recall in the precued and postcued conditions. In Experiment 2, both free recall and ISR were sensitive to word length and presentation rate and showed similar patterns of rehearsal. In Experiment 3, both tasks were sensitive to word length and articulatory suppression. The word length effects generalized to 6-item (Experiment 4) and 12-item (Experiment 5) lists. These findings suggest that the two tasks are underpinned by highly similar rehearsal and recall processes.

  12. Age of acquisition and word frequency in written picture naming.

    Science.gov (United States)

    Bonin, P; Fayol, M; Chalard, M

    2001-05-01

    This study investigates age of acquisition (AoA) and word frequency effects in both spoken and written picture naming. In the first two experiments, reliable AoA effects on object naming speed, with objective word frequency controlled for, were found in both spoken (Experiment 1) and written picture naming (Experiment 2). In contrast, no reliable objective word frequency effects were observed on naming speed, with AoA controlled for, in either spoken (Experiment 3) or written (Experiment 4) picture naming. The implications of the findings for written picture naming are briefly discussed.

  13. When does word frequency influence written production?

    Science.gov (United States)

    Baus, Cristina; Strijkers, Kristof; Costa, Albert

    2013-01-01

    The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner that words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  14. When does word frequency influence written production?

    Directory of Open Access Journals (Sweden)

    Cristina eBaus

    2013-12-01

    Full Text Available The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analysed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner that words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  15. Word order in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2012-01-01

    The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase, as one of the most crucial aspects of the grammar of any spoken language. It aims to investigate the order of the primary constituents, which can be subject, object, or verb, of a simple

  16. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  17. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M.W.C. van; Keuning, J.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2016-01-01

    Background: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. Aims: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  18. Auditory comprehension: from the voice up to the single word level

    OpenAIRE

    Jones, Anna Barbara

    2016-01-01

    Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...

  19. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  20. Many a true word spoken in jest: Visual representation practices in ...

    African Journals Online (AJOL)

    Although humour conceals the cultural exclusion in the data set, the cultural codes in the visual material generalise the non-Western 'other' as either extremely religious or as fundamentally different. Key terms: hegemony, Flemish language textbook, Critical Discourse Analysis, focus group discussion, representational ...

  1. Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners

    Science.gov (United States)

    Lin, Yen-Liang

    2015-01-01

    This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…

  2. Cohesion as interaction in ELF spoken discourse

    Directory of Open Access Journals (Sweden)

    T. Christiansen

    2013-10-01

    Full Text Available Hitherto, most research into cohesion has concentrated on texts (usually written) only in standard Native Speaker English – e.g. Halliday and Hasan (1976). By contrast, following on the work in anaphora of such scholars as Reinhart (1983) and Cornish (1999), Christiansen (2011) describes cohesion as an interactive process focusing on the link between text cohesion and discourse coherence. Such a consideration of cohesion from the perspective of discourse (i.e. the process of which text is the product – Widdowson 1984, p. 100) is especially relevant within a lingua franca context, as the issue of different variations of ELF and inter-cultural concerns (Guido 2008) add extra dimensions to the complex multi-code interaction. In this case study, six extracts of transcripts (approximately 1000 words each), taken from the VOICE corpus (2011) of conference question and answer sessions (spoken interaction) set in multicultural university contexts, are analysed in depth by means of a qualitative method.

  3. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  4. Using the Corpus of Spoken Afrikaans to generate an Afrikaans ...

    African Journals Online (AJOL)

    This paper presents two chatbot systems, ALICE and. Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the. Corpus of Spoken Afrikaans (Korpus Gesproke Afrikaans) to retrain the ALICE chatbot system with human ...

  5. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  6. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    Science.gov (United States)

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  7. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  8. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  9. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    Science.gov (United States)

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  10. Does Hearing Several Speakers Reduce Foreign Word Learning?

    Science.gov (United States)

    Ludington, Jason Darryl

    2016-01-01

    Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…

  11. Is there pain in champagne? Semantic involvement of words within words during sense-making

    NARCIS (Netherlands)

    van Alphen, P.M.; van Berkum, J.J.A.

    2010-01-01

    In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch

  12. Non-intentional but not automatic: reduction of word- and arrow-based compatibility effects by sound distractors in the same categorical domain.

    Science.gov (United States)

    Miles, James D; Proctor, Robert W

    2009-10-01

    In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.
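    The congruency bias described here is typically quantified as the difference between mean reaction times on incongruent and congruent trials. A minimal sketch of that computation (the trial representation and function name are illustrative, not from the paper):

```python
from statistics import mean

def compatibility_effect(trials):
    """Spatial compatibility effect: mean RT (ms) on incongruent trials
    minus mean RT on congruent trials. A positive value reflects the
    congruent-side response advantage described in the abstract."""
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(incongruent) - mean(congruent)

# Toy data: responses faster on the side congruent with the word/arrow.
trials = [("congruent", 420), ("congruent", 440),
          ("incongruent", 470), ("incongruent", 490)]
effect = compatibility_effect(trials)  # 480 - 430 = 50 ms
```

    A distractor condition attenuating the bias would show up as a smaller value of this difference.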

  13. A joint model of word segmentation and meaning acquisition through cross-situational learning.

    Science.gov (United States)

    Räsänen, Okko; Rasilo, Heikki

    2015-10-01

    Human infants learn meanings for spoken words in complex interactions with other people, but the exact learning mechanisms are unknown. Among researchers, a widely studied learning mechanism is called cross-situational learning (XSL). In XSL, word meanings are learned when learners accumulate statistical information between spoken words and co-occurring objects or events, allowing the learner to overcome referential uncertainty after having sufficient experience with individually ambiguous scenarios. Existing models in this area have mainly assumed that the learner is capable of segmenting words from speech before grounding them to their referential meaning, while segmentation itself has been treated relatively independently of the meaning acquisition. In this article, we argue that XSL is not just a mechanism for word-to-meaning mapping, but that it provides strong cues for proto-lexical word segmentation. If a learner directly solves the correspondence problem between continuous speech input and the contextual referents being talked about, segmentation of the input into word-like units emerges as a by-product of the learning. We present a theoretical model for joint acquisition of proto-lexical segments and their meanings without assuming a priori knowledge of the language. We also investigate the behavior of the model using a computational implementation, making use of transition probability-based statistical learning. Results from simulations show that the model is not only capable of replicating behavioral data on word learning in artificial languages, but also shows effective learning of word segments and their meanings from continuous speech. Moreover, when augmented with a simple familiarity preference during learning, the model shows a good fit to human behavioral data in XSL tasks. 
These results support the idea of simultaneous segmentation and meaning acquisition and show that comprehensive models of early word segmentation should take referential word
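    The transition-probability component of such statistical learning can be illustrated in a few lines. The toy implementation below (names and threshold are illustrative; the paper's full model additionally grounds segments in co-occurring referents) estimates forward transitional probabilities between syllables and posits word boundaries at TP dips, in the spirit of Saffran-style segmentation:

```python
from collections import Counter

def transition_probs(utterances):
    """Forward transitional probability TP(a -> b) = count(ab) / count(a),
    estimated over syllable-tokenised utterances."""
    pair_counts, syl_counts = Counter(), Counter()
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            pair_counts[(a, b)] += 1
            syl_counts[a] += 1
    return {pair: n / syl_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(utterance, tps, threshold=0.8):
    """Posit a word boundary wherever the TP to the next syllable dips
    below the threshold; within-word transitions stay above it."""
    words, current = [], [utterance[0]]
    for a, b in zip(utterance, utterance[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return [''.join(w) for w in words]

# Toy language: three trisyllabic words concatenated in varying orders,
# so within-word TPs are 1.0 and between-word TPs are at most 2/3.
w1, w2, w3 = ["go", "la", "bu"], ["pa", "do", "ti"], ["bi", "da", "ku"]
utterances = [w1 + w2 + w3, w2 + w3 + w1, w3 + w1 + w2,
              w1 + w3 + w2, w2 + w1 + w3]
tps = transition_probs(utterances)
```

    Running `segment(utterances[0], tps)` recovers the three words from the continuous stream, without any prior lexicon.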

  14. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  15. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
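    The core ALFF measure can be illustrated with a minimal computation: restrict the spectral amplitude of a voxel's time series to the conventional 0.01–0.08 Hz band and average it. The sketch below uses a naive DFT for self-containment (real pipelines use FFT-based neuroimaging tools, and the function name here is illustrative):

```python
import math
import cmath

def alff(ts, tr, low=0.01, high=0.08):
    """Amplitude of low-frequency fluctuation: mean single-sided spectral
    amplitude of a demeaned time series within the [low, high] Hz band.
    ts: one voxel's BOLD samples; tr: repetition time in seconds."""
    n = len(ts)
    m = sum(ts) / n
    xs = [x - m for x in ts]                    # remove the DC component
    amps = []
    for k in range(1, n // 2 + 1):
        f = k / (n * tr)                        # frequency of DFT bin k
        if low <= f <= high:
            coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                        for t, x in enumerate(xs))
            amps.append(2 * abs(coeff) / n)     # single-sided amplitude
    return sum(amps) / len(amps) if amps else 0.0

# A 0.05 Hz oscillation (inside the band) vs. a 0.2 Hz one (outside),
# sampled at a typical TR of 2 s for 200 volumes.
tr, n = 2.0, 200
slow = [math.sin(2 * math.pi * 0.05 * i * tr) for i in range(n)]
fast = [math.sin(2 * math.pi * 0.20 * i * tr) for i in range(n)]
```

    Regional ALFF values computed this way per voxel are the quantities the study then correlates with learning performance across participants.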

  16. Voice congruency facilitates word recognition.

    Directory of Open Access Journals (Sweden)

    Sandra Campeanu

    Full Text Available Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  17. Voice congruency facilitates word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  18. Social interaction facilitates word learning in preverbal infants: Word-object mapping and word segmentation.

    Science.gov (United States)

    Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo

    2017-08-01

    In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. CROATIAN ADULT SPOKEN LANGUAGE CORPUS (HrAL)

    Directory of Open Access Journals (Sweden)

    Jelena Kuvač Kraljević

    2016-01-01

    Full Text Available Interest in spoken-language corpora has increased over the past two decades, leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech defined by a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL), its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org), in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the transcription format Codes for Human Analysis of Transcripts (CHAT) and the Computerised Language Analysis (CLAN) suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units) based on syntactic criteria. Most transcripts were linked to their source audios. TalkBank is publicly free, i.e. all data stored in it can be shared by the wider community in accordance with the basic rules of the TalkBank. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and studies of synchronic language changes in Croatian.
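    Token and type counts like those reported for HrAL are computed over the speaker tiers of CHAT transcripts. A minimal sketch of such a count (the tier filtering is simplified and the function name is illustrative; real CHAT files carry `@` headers and `%` dependent tiers with much richer coding):

```python
import re
from collections import Counter

def token_type_counts(lines):
    """Count word tokens and types over the speaker tiers of a
    CHAT-style transcript (lines beginning with '*SPEAKER:')."""
    counts = Counter()
    for line in lines:
        if not line.startswith('*'):
            continue  # skip @-headers and %-dependent tiers
        _, _, utterance = line.partition(':')
        counts.update(re.findall(r"[^\W\d_]+", utterance.lower()))
    return sum(counts.values()), len(counts)

sample = [
    "@Begin",
    "*SPK1:\tdobar dan svima .",
    "%mor:\tadj|dobar n|dan pro|svima .",
    "*SPK2:\tdobar dan .",
    "@End",
]
tokens, types = token_type_counts(sample)  # 5 tokens, 3 types
```

    The ratio of the two numbers (type/token ratio) is a common first proxy for the lexical productivity the abstract mentions.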

  20. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  1. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants.

    Science.gov (United States)

    Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur

    The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a
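    The word-onset gating procedure used here (increasing 50 msec increments of onset until correct identification) can be sketched as a simple loop; the callback and names are illustrative stand-ins, not from the study:

```python
def gating_trial(word_duration_ms, recognized, step_ms=50):
    """Present increasing word-onset gates in step_ms increments until
    the listener identifies the word; return the isolation point in ms,
    or None if the full word is presented without recognition.
    `recognized(gate_ms)` stands in for playing the first gate_ms of the
    word and scoring the listener's response."""
    gate = step_ms
    while gate <= word_duration_ms:
        if recognized(gate):
            return gate
        gate += step_ms
    return None

# A simulated listener who needs 230 ms of onset information is credited
# at the first gate that covers it, i.e. 250 ms.
isolation_point = gating_trial(600, lambda gate: gate >= 230)
```

    The isolation point is the dependent measure: smaller values indicate that less onset information was needed, e.g. because sentence context constrained the word.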

  2. Some words on Word

    NARCIS (Netherlands)

    Janssen, Maarten; Visser, A.

    In many disciplines, the notion of a word is of central importance. For instance, morphology studies le mot comme tel, pris isol´ement (Mel’ˇcuk, 1993 [74]). In the philosophy of language the word was often considered to be the primary bearer of meaning. Lexicography has as its fundamental role

  3. A Descriptive Study of Registers Found in Spoken and Written Communication (A Semantic Analysis)

    Directory of Open Access Journals (Sweden)

    Nurul Hidayah

    2016-07-01

    Full Text Available This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research. The data of the study are registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and on the internet. The data can take the form of words, phrases and abbreviations. For collecting the data, the writer uses the library method as her instrument, relating it to the study of register in spoken and written communication. The data are analysed using a descriptive method. Registers are here separated into formal and informal registers, and the meaning of each register is identified.

  4. Spoken grammar awareness raising: Does it affect the listening ability of Iranian EFL learners?

    Directory of Open Access Journals (Sweden)

    Mojgan Rashtchi

    2011-12-01

    Full Text Available Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that grammar of spoken language is different from written language. However, most listening and speaking materials are concocted based on written grammar and lack core spoken language features. The aim of the present study was to explore the question whether awareness of spoken grammar features could affect learners’ comprehension of real-life conversations. To this end, 45 university students in two intact classes participated in a listening course employing corpus-based materials. The instruction of the spoken grammar features to the experimental group was done overtly through awareness raising tasks, whereas the control group, though exposed to the same materials, was not provided with such tasks for learning the features. The results of the independent samples t tests revealed that the learners in the experimental group comprehended everyday conversations much better than those in the control group. Additionally, the highly positive views of spoken grammar held by the learners, which were elicited by means of a retrospective questionnaire, were generally comparable to those reported in the literature.
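    The group comparison reported here relies on the independent samples t test. For reference, the pooled-variance t statistic can be computed in a few lines (a sketch assuming equal group variances; the study's actual scores are not reproduced here):

```python
import math
from statistics import mean, variance

def independent_t(a, b):
    """Student's independent-samples t statistic with pooled variance
    (equal-variance assumption); a and b are the two groups' scores."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))
```

    The resulting statistic is evaluated against the t distribution with na + nb - 2 degrees of freedom to decide whether the experimental group's advantage is significant.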

  5. Phonological and Semantic Knowledge Are Causal Influences on Learning to Read Words in Chinese

    Science.gov (United States)

    Zhou, Lulin; Duff, Fiona J.; Hulme, Charles

    2015-01-01

    We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…

  6. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    Science.gov (United States)

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  7. Words Get in the Way: Linguistic Effects on Talker Discrimination.

    Science.gov (United States)

    Narayan, Chandan R; Mak, Lorinda; Bialystok, Ellen

    2017-07-01

    A speech perception experiment provides evidence that the linguistic relationship between words affects the discrimination of their talkers. Listeners discriminated two talkers' voices with various linguistic relationships between their spoken words. Listeners were asked whether two words were spoken by the same person or not. Word pairs varied with respect to the linguistic relationship between the component words, forming either: phonological rhymes, lexical compounds, reversed compounds, or unrelated pairs. The degree of linguistic relationship between the words affected talker discrimination in a graded fashion, revealing biases listeners have regarding the nature of words and the talkers that speak them. These results indicate that listeners expect a talker's words to be linguistically related, and more generally, indexical processing is affected by linguistic information in a top-down fashion even when listeners are not told to attend to it. Copyright © 2016 Cognitive Science Society, Inc.

  8. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of hearing loss before device fitting, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently…
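    The matched-pairs design described above can be sketched with a paired-samples t test. All scores below are invented for illustration; the study's actual TASL data are not reproduced here.

```python
import math

# Hypothetical TASL raw scores for 11 matched pairs (invented for illustration).
ddi =     [42, 38, 45, 50, 36, 41, 47, 39, 44, 48, 40]
control = [35, 33, 40, 44, 34, 36, 41, 35, 39, 42, 37]

def paired_t(x, y):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

t = paired_t(ddi, control)
print(f"t({len(ddi) - 1}) = {t:.2f}")  # a large positive t favors the DDI group
```

    A matched-pairs analysis like this compares each child only with their own match, which is why the groups were equated on device-fitting age, time in program, and degree of hearing loss.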

  9. Basic speech recognition for spoken dialogues

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-09-01

    Spoken dialogue systems (SDSs) have great potential for information access in the developing world. However, the realisation of that potential requires the solution of several challenging problems, including the development of sufficiently accurate...

  10. The effect of post-learning presentation of music on long-term word-list retention.

    Science.gov (United States)

    Judde, Sarah; Rickard, Nikki

    2010-07-01

    Memory consolidation processes occur slowly over time, allowing recently formed memories to be altered soon after acquisition. Although post-learning arousal treatments have been found to modulate memory consolidation, examination of the temporal parameters of these effects in humans has been limited. In the current study, 127 participants learned a neutral word list and were exposed to either a positively or negatively arousing musical piece following delays of 0, 20 or 45 min. One week later, participants completed a long-term memory recognition test, followed by Carver and White's (1994) approach/avoidance personality scales. Retention was significantly enhanced, regardless of valence, when the emotion manipulation occurred at 20 min post-learning, but not immediately or at 45 min. Further, the 20-min interval effect was found to be moderated by high 'drive' approach sensitivity. The selective facilitatory conditions of music identified in the current study (timing and personality) offer valuable insights for future development of more specific memory intervention strategies.

  11. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  13. Do handwritten words magnify lexical effects in visual word recognition?

    Science.gov (United States)

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  14. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  15. Variation Patterns in Across-Word Regressive Assimilation in Picard: An Optimality Theoretic Account.

    Science.gov (United States)

    Cardoso, Walcir

    2001-01-01

    Offers an optimality theoretic account for the phonological process of across-word regressive assimilation (AWRA) in Picard, a Gallo-Romance dialect spoken in the Picardie region in Northern France and Southern Belgium. Focuses on the varieties spoken in the Vimeu region of France. Examines one particular topic in the analysis of AWRA: the…

  16. TEACHING TURKISH AS SPOKEN IN TURKEY TO TURKIC SPEAKERS - TÜRK DİLLİLERE TÜRKİYE TÜRKÇESİ ÖĞRETİMİ NASIL OLMALIDIR?

    Directory of Open Access Journals (Sweden)

    Ali TAŞTEKİN

    2015-12-01

    sentence, which is the basic component that states a meaningful idea, should be taken as the basis while teaching the language; and for those who do not know the Turkish alphabet, letters, the smallest units of written language, should first be taught by means of illustrated alphabets. Instead of making students memorize words, the meanings and forms of words should be presented within sentences, while tenses, personal pronouns and endings should likewise be taught within the sentence. While teaching sentences, the following should be taught in order: correct pronunciation of the sentence, its meaning, correct articulation (first oral and then written), and then how to analyse the sentence structure and produce similar sentences; stress, intonation and spelling rules should be conveyed by means of sentences. Teaching Turkish as spoken in Turkey to Turkic speakers and teaching Turkish to foreigners are still considered the same. A state policy has to be formed on this matter; the scope, content and teaching method of teaching Turkish to non-native speakers must be evaluated based on scientific criteria, and measures must be taken in terms of professional Turkish language teaching. In a word, new measures must be taken and scientific studies must be conducted for the activity of Teaching Turkish as Spoken in Turkey to Turkic Speakers, in fields varying from training course teachers, producing course books, and determining the method and location of classes to the production and selection of educational tools.

  17. P2-13: Location Word Cues' Effect on Location Discrimination Task: Cross-Modal Study

    Directory of Open Access Journals (Sweden)

    Satoko Ohtsuka

    2012-10-01

    As is well known, participants are slower and make more errors in responding to the display color of an incongruent color word than a congruent one. This traditional Stroop effect is often accounted for by relatively automatic and dominant word processing. Although the word dominance account has been widely supported, it is not clear in which perceptual tasks it is valid. Here we aimed to examine whether the word dominance effect is observed in location Stroop tasks and in audio-visual situations. The participants were required to press a key according to the location of visual (Experiment 1) and audio (Experiment 2) targets, left or right, as soon as possible. A cue of written (Experiments 1a and 2a) or spoken (Experiments 1b and 2b) location words, "left" or "right", was presented on the left or right side of the fixation with cue lead times (CLT) of 200 ms and 1200 ms. Reaction time from target presentation to key press was recorded as a dependent variable. The results were that the location validity effect was marked in within-modality but less so in cross-modality trials. The word validity effect was strong in within- but not in cross-modality trials. The CLT gave some effect of inhibition of return. So word dominance could be less effective in location tasks and in cross-modal situations. The spatial correspondence seems to overcome the word effect.

  18. Different neurophysiological mechanisms underlying word and rule extraction from speech.

    Directory of Open Access Journals (Sweden)

    Ruth De Diego Balaguer

    The initial process of identifying words from spoken language and the detection of more subtle regularities underlying their structure are mandatory processes for language acquisition. Little is known about the cognitive mechanisms that allow us to extract these two types of information and their specific time-course of acquisition following initial contact with a new language. We report time-related electrophysiological changes that occurred while participants learned an artificial language. These changes strongly correlated with the discovery of the structural rules embedded in the words. These changes were clearly different from those related to word learning and occurred during the first minutes of exposure. There is a functional distinction in the nature of the electrophysiological signals during acquisition: an increase in negativity (N400) in the central electrodes is related to word learning, and development of a frontal positivity (P2) is related to rule learning. In addition, the results of an online implicit test and a post-learning test indicate that, once the rules of the language have been acquired, new words following the rule are processed as words of the language. By contrast, new words violating the rule induce syntax-related electrophysiological responses when inserted online in the stream (an early frontal negativity followed by a late posterior positivity) and clear lexical effects when presented in isolation (N400 modulation). The present study provides direct evidence suggesting that the mechanisms to extract words and structural dependencies from continuous speech are functionally segregated. When these mechanisms are engaged, the electrophysiological marker associated with rule learning appears very quickly, during the earliest phases of exposure to a new language.

  19. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  20. Signal Words

    Science.gov (United States)

    SIGNAL WORDS TOPIC FACT SHEET NPIC fact sheets are designed to answer questions that are commonly asked by the ... making decisions about pesticide use. What are Signal Words? Signal words are found on pesticide product labels, ...

  1. Affective Congruence between Sound and Meaning of Words Facilitates Semantic Decision.

    Science.gov (United States)

    Aryani, Arash; Jacobs, Arthur M

    2018-05-31

    A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). Thus, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature in spoken language applying also to "ordinary" words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words in two distinctive groups of iconic vs. non-iconic based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.
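    The congruence criterion described above can be sketched as follows. The rating scale, the midpoint split, and all items and values are hypothetical, invented for illustration; they are not the study's materials.

```python
# Hypothetical arousal ratings on a 1-5 scale (all values invented).
# lexical   = arousal of the word's meaning
# sublexical = arousal suggested by the word's sound
words = {
    "alarm":   (4.6, 4.2),  # high-high -> congruent ("iconic")
    "pillow":  (1.8, 2.1),  # low-low   -> congruent ("iconic")
    "knife":   (4.4, 2.0),  # high-low  -> incongruent ("non-iconic")
    "blanket": (1.9, 4.1),  # low-high  -> incongruent ("non-iconic")
}

MIDPOINT = 3.0  # hypothetical split between "low" and "high" arousal

def is_iconic(lexical, sublexical, mid=MIDPOINT):
    """Congruent: lexical and sublexical arousal fall on the same side of the midpoint."""
    return (lexical > mid) == (sublexical > mid)

iconic = [w for w, (lex, sub) in words.items() if is_iconic(lex, sub)]
print(iconic)
```

    Grouping items this way yields the two stimulus sets whose response times and accuracies the study compares.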

  2. Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults

    Science.gov (United States)

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…

  3. The Penefit of Salience: Salient Accented, but Not Unaccented Words Reveal Accent Adaptation Effects.

    Science.gov (United States)

    Grohe, Ann-Kathrin; Weber, Andrea

    2016-01-01

    In two eye-tracking experiments, the effects of salience in accent training and speech accentedness on spoken-word recognition were investigated. Salience was expected to increase a stimulus' prominence and therefore promote learning. A training-test paradigm was used on native German participants utilizing an artificial German accent. Salience was elicited by two different criteria: production and listening training as a subjective criterion and accented (Experiment 1) and canonical test words (Experiment 2) as an objective criterion. During training in Experiment 1, participants either read single German words out loud and deliberately devoiced initial voiced stop consonants (e.g., Balken-"beam" pronounced as (*) Palken), or they listened to pre-recorded words with the same accent. In a subsequent eye-tracking experiment, looks to auditorily presented target words with the accent were analyzed. Participants from both training conditions fixated accented target words more often than a control group without training. Training was identical in Experiment 2, but during test, canonical German words that overlapped in onset with the accented words from training were presented as target words (e.g., Palme-"palm tree" overlapped in onset with the training word (*) Palken) rather than accented words. This time, no training effect was observed; recognition of canonical word forms was not affected by having learned the accent. Therefore, accent learning was only visible when the accented test tokens in Experiment 1, which were not included in the test of Experiment 2, possessed sufficient salience based on the objective criterion "accent." These effects were not modified by the subjective criterion of salience from the training modality.

  4. Paroles de Soleil (Words of the Sun): A tentative classification of sundial mottoes according to their intrinsic meaning.

    Directory of Open Access Journals (Sweden)

    Olivier Escuder

    2009-07-01

    γνώμων (gnomon), meaning the ultimately competent judge. The interpretation of mottoes and other inscriptions on sundials may facilitate the understanding of the notion of the passing of time, its possible conscious and unconscious, individual and collective interpretations, and the relationships that Mankind has established with it. France is privileged to have on its territory some very ancient sundials; consequently, an analysis of the available data can reveal how those relations have changed over a period of centuries or even decades. Since 1998, a special committee of the French Society for Astronomy's Sundial Commission (Paris, France) has been analyzing sundials and their mottoes. For the past seven years, the group has worked on a classification system founded on 12 categories based on their intrinsic meaning (for example: religious, philosophical, or optimistic mottoes). After the analysis of approximately 3,000 mottoes, the system revealed several trends of particular meanings through time, and the results were published under the title Paroles de Soleil – Devises des cadrans solaires de France. This publication is now available to the public. It presents and explains 2,159 mottoes found on French sundials.

  5. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  6. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  7. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    window focused over the part which most likely contains an answer to the query. The two systems are integrated into a full spoken query answering system. The prototype can answer queries and questions within the chosen football (soccer) test domain, but the system has the flexibility for being ported...

  8. Spoken Cochabamba Quechua, Units 13-24.

    Science.gov (United States)

    Lastra, Yolanda; Sola, Donald F.

    Units 13-24 of the Spoken Cochabamba Quechua course follow the general format of the first volume (Units 1-12). This second volume is intended for use in an intermediate or advanced course and includes more complex dialogs, conversations, "listening-ins," and dictations, as well as grammar and exercise sections covering additional…

  9. Spoken Ayacucho Quechua, Units 1-10.

    Science.gov (United States)

    Parker, Gary J.; Sola, Donald F.

    This beginning course in Ayacucho Quechua, spoken by about a million people in south-central Peru, was prepared to introduce the phonology and grammar of this dialect to speakers of English. The first of two volumes, it serves as a text for a 6-week intensive course of 20 class hours a week. The authors compare and contrast significant features of…

  10. Mapping Students' Spoken Conceptions of Equality

    Science.gov (United States)

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students were asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing-number problem. Students' responses were conceptualised as acts of communication and…

  11. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  12. Business Spoken English Learning Strategies for Chinese Enterprise Staff

    Institute of Scientific and Technical Information of China (English)

    Han Li

    2013-01-01

    This study addresses the issue of promoting effective spoken business English among enterprise staff in China. It aims to assess spoken English learning methods and to identify the difficulties staff face in oral English expression in business contexts. It also provides strategies for enhancing enterprise staff's level of spoken business English.

  13. Presentations

    International Nuclear Information System (INIS)

    2007-01-01

    The presented materials consist of presentations from an international workshop held in Warsaw from 4 to 5 October 2007. The main subject of the meeting was progress in manufacturing, as well as research program development, for a neutron detector which is planned to be placed at the GANIL laboratory and will be used in nuclear spectroscopy research.

  14. Early language development in children with profound hearing loss fitted with a device at a young age: part I--the time period taken to acquire first words and first word combinations.

    Science.gov (United States)

    Nott, Pauline; Cowan, Robert; Brown, P Margaret; Wigglesworth, Gillian

    2009-10-01

    Increasing numbers of infants and young children are now presenting to implantation centers and early intervention programs as the impact of universal newborn hearing screening programs is felt worldwide. Although results of a number of studies have highlighted the benefit of early identification and early fitting of hearing devices, there is relatively little research on the impact of early fitting of these devices on first language milestones. The aim of this study was to investigate the early spoken language milestones of young children with hearing loss (HL) from two perspectives: first, the acquisition of the first lexicon (i.e., the first 100 words) and second, the emergence of the first word combinations. Two groups of participants, one comprising 24 participants with profound HL and a second comprising 16 participants with normal hearing, were compared. Twenty-three participants in the HL group were fitted with a cochlear implant and one with bilateral hearing aids. All of these were "switched-on" or fitted before 30 months of age and half at words and any word combinations produced while reaching this single-word target. Acquisition of single words was compared by using the time period (in days) taken to reach several single-word targets (e.g., 50 words, 100 words) from the date of production of the first word. The emergence of word combinations was analyzed from two perspectives: first, the time (in days) from the date of production of the first word to the emergence of the first word combinations and second, the size of the single-word lexicon when word combinations emerged. The normal-hearing group required a significantly shorter time period to acquire the first 50 words than the HL group. Although both groups demonstrated acceleration in lexical acquisition, the hearing group took significantly fewer days to reach the second 50 words relative to the first 50 words than did the HL group. Finally, the hearing group produced word combinations

  15. Acoustic Masking Disrupts Time-Dependent Mechanisms of Memory Encoding in Word-List Recall

    Science.gov (United States)

    Cousins, Katheryn A.Q.; Dar, Jonathan; Wingfield, Arthur; Miller, Paul

    2013-01-01

    Recall of recently heard words is affected by the clarity of presentation: even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply “recognized” versus “not-recognized”. More surprising is that when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the Linking by Active Maintenance Model (LAMM). This computational model of perception and encoding predicts that these effects are time dependent. Here we challenge our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We find that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrate that they can be accounted for by LAMM. PMID:24838269

  16. Presentations

    International Nuclear Information System (INIS)

    2007-01-01

    The PARIS meeting was held in Cracow, Poland from 14 to 15 May 2007. The main subject discussed during this meeting was the status of an international project dedicated to gamma spectroscopy research. The scientific research program includes investigations of the giant dipole resonance, probes of hot nuclei produced in heavy-ion reactions, Jacobi shape transitions, isospin mixing and nuclear multifragmentation. The mentioned programme requires R&D such as new scintillation materials (lanthanum chlorides and bromides) as well as new photodetection sensors (avalanche photodiodes); these are also subjects of discussion. Additionally, results of computerized simulations of scintillation detector properties by means of the GEANT4 code are presented

  17. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    Science.gov (United States)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  18. Tracking the time course of word-frequency effects in auditory word recognition with event-related potentials.

    Science.gov (United States)

    Dufour, Sophie; Brunellière, Angèle; Frauenfelder, Ulrich H

    2013-04-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word frequency effect that is limited to a purely decisional locus after word identification has been completed. Copyright © 2012 Cognitive Science Society, Inc.

  19. Individual language experience modulates rapid formation of cortical memory circuits for novel words

    Science.gov (United States)

    Kimppa, Lilli; Kujala, Teija; Shtyrov, Yury

    2016-01-01

    Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain's capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. The formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to a novel lexicon. Therefore, we recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repetitively presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and an earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon. PMID:27444206

  20. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  1. Presentation

    Directory of Open Access Journals (Sweden)

    Eduardo Vicente

    2013-06-01

    Full Text Available In the present edition of Significação – Scientific Journal for Audiovisual Culture, and in the ones to follow, something new is brought: thematic dossiers organized by invited scholars. The subject appointed for the very first of them was Radio, and the invited scholar was Eduardo Vicente, professor at the Graduate Course in Audiovisual and at the Postgraduate Program in Audiovisual Media and Processes of the School of Communication and Arts of the University of São Paulo (ECA-USP). Entitled Radio Beyond Borders, the dossier gathers six articles, with the intention of reuniting works both on the perspectives of usage of such media and on the new possibilities of aesthetic experimentation being built up for it, especially considering the new digital technologies and technological convergences. It also intends to present works with original theoretical approaches and reflections able to reset the way we look at what is today already a centennial medium. Having broadened the meaning of "beyond borders", four foreign authors were invited to join the dossier. This is the first time they are being published in this country, and so, in all cases, the articles were either written in or translated into Portuguese. The dossier begins with "Radio is dead… Long live the sound", the transcription of a thought-provoking lecture given by Armand Balsebre (Autonomous University of Barcelona), one of the most influential authors in the field of radio studies. It addresses the challenges such media must face so that it can become "a new sound media, in the context of a new soundscape or sound-sphere, for the new listeners". Andrew Dubber (Birmingham City University), regarding the challenges posed by the Digital Era, argues for a theoretical approach in radio studies that considers a Media Ecology. The author understands the form and discourse of radio as a negotiation of affordances and

  2. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    Full Text Available The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  3. SPOKEN-LANGUAGE FEATURES IN CASUAL CONVERSATION: A Case of EFL Learners' Casual Conversation

    Directory of Open Access Journals (Sweden)

    Aris Novi

    2017-12-01

    Full Text Available Spoken text differs from written text in its context dependency, turn-taking organization, and dynamic structure. EFL learners, however, sometimes find it difficult to produce the typical characteristics of spoken language, particularly in casual talk. When they are asked to conduct a conversation, some of them tend to be script-based, which is considered unnatural. Using the theory of Thornbury (2005), this paper aims to analyze the characteristics of spoken language in casual conversation, which cover spontaneity, interactivity, interpersonality, and coherence. The study used discourse analysis to reveal these four features in the turns and moves of three casual conversations. The findings indicate that not all sub-features were used in the conversations. The spontaneity features were used 132 times; the interactivity features 1081 times; the interpersonality features 257 times; and the coherence (negotiation) features 526 times. The results also show that some participants naturally produce certain sub-features more dominantly than others. This finding is expected to provide a model of how spoken interaction should be carried out and, more importantly, to raise English teachers' and lecturers' awareness in teaching the features of spoken language, so that students can develop their communicative competence as native speakers of English do.

  4. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    Science.gov (United States)

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In four separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to distinguish old from new items in a new sequence of stimuli. They were also asked to indicate on which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  5. Correlations between vocabulary and phonological acquisition: number of words produced versus acquired consonants.

    Science.gov (United States)

    Wiethan, Fernanda Marafiga; Mota, Helena Bolli; Moraes, Anaelena Bragança de

    2016-01-01

    To verify the probable correlations between the number of word types and the number of consonants in the general phonological system in children with typical language development. Study participants were 186 children aged one year and six months to five years, 11 months and 29 days who were monolingual Brazilian Portuguese speakers with typical language development. Data collection involved speech, language and hearing assessments and spontaneous speech recordings. Phonology was assessed with regard to the number of acquired consonants in the general phonological system, in each syllable structure and in Implicational Model of Feature Complexity (IMFC) levels. Vocabulary was assessed with regard to number of word types produced. These data were compared across age groups. After that, correlations between the word types produced and the variables established for the phonological system were analyzed. The significance level adopted was 5%. All phonological aspects evaluated presented gradual growth. Word types produced showed a similar behavior, though with a small regression at the age of five years. Different positive correlations occurred between the spoken word types and the variables analyzed in the phonological system. Only one negative correlation occurred with respect to the production of complex onset in the last age group analyzed. The phonology and vocabulary of the study participants present similar behaviors. There are many positive correlations between the word types produced and the different aspects of phonology, except regarding complex onset.

  6. Activation of words with phonological overlap

    Directory of Open Access Journals (Sweden)

    Claudia K. Friedrich

    2013-08-01

    Full Text Available Multiple lexical representations overlapping with the input (cohort neighbors) are temporarily activated in the listener's mental lexicon when speech unfolds in time. Activation of cohort neighbors appears to decline rapidly as soon as there is a mismatch with the input. However, it is a matter of debate whether or not they are completely excluded from further processing. We recorded behavioral data and event-related brain potentials (ERPs) in auditory-visual word onset priming during a lexical decision task. As primes we used the first two syllables of spoken German words. In a carrier word condition, the primes were extracted from spoken versions of the target words (ano- from ANORAK 'anorak'). In a cohort neighbor condition, the primes were taken from words that overlap with the target word up to the second nucleus (ana- taken from ANANAS 'pineapple'). Relative to a control condition, where primes and targets were unrelated, lexical decision responses for cohort neighbors were delayed. This reveals that cohort neighbors are disfavored by the decision processes at the behavioral front end. In contrast, left-anterior ERPs reflected long-lasting facilitated processing of cohort neighbors. We interpret these results as evidence for extended parallel processing of cohort neighbors. That is, in parallel to the preparation and elicitation of delayed lexical decision responses to cohort neighbors, aspects of the processing system appear to keep track of those less efficient candidates.
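    The cohort notion at the heart of this study can be illustrated with a toy example: candidates remain active only while they match the unfolding input, and drop out at the first mismatch. The following Python sketch is purely illustrative (the lexicon and prefixes are invented for the example, not the study's German stimuli):

    ```python
    def cohort(prefix, lexicon):
        """Return the words still consistent with the input heard so far."""
        return [word for word in lexicon if word.startswith(prefix)]

    # Toy lexicon; 'ananas' and 'anorak' echo the paper's prime/target pair.
    lexicon = ["anorak", "ananas", "anger", "banana"]

    # As the input unfolds, mismatching candidates leave the cohort.
    print(cohort("an", lexicon))   # ['anorak', 'ananas', 'anger']
    print(cohort("ano", lexicon))  # ['anorak']
    ```

    The empirical question of the study is precisely what happens to candidates such as 'ananas' after they fall out of the cohort at "ano": behaviorally they are disfavored, yet the ERP data suggest they are still processed in parallel.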

  7. A Positivity Bias in Written and Spoken English and Its Moderation by Personality and Gender.

    Science.gov (United States)

    Augustine, Adam A; Mehl, Matthias R; Larsen, Randy J

    2011-09-01

    The human tendency to use positive words ("adorable") more often than negative words ("dreadful") is called the linguistic positivity bias. We find evidence for this bias in two studies of word use, one based on written corpora and another based on naturalistic speech samples. In addition, we demonstrate that the positivity bias applies to nouns and verbs as well as adjectives. We also show that it is found to the same degree in written as well as spoken English. Moreover, personality traits and gender moderate the effect, such that persons high on extraversion and agreeableness and women display a larger positivity bias in naturalistic speech. Results are discussed in terms of how the linguistic positivity bias may serve as a mechanism for social facilitation. People, in general, and some people more than others, tend to talk about the brighter side of life.

  8. ROMANCE LOAN WORDS IN HRELJIĆ BEDROOM

    Directory of Open Access Journals (Sweden)

    Lina Pliško

    2016-01-01

    Full Text Available In this paper we present the immediate etymology (etymologia proxima) of twenty words of Romance origin belonging to the semantic fields of furniture (5), bed parts (4), bed linen (5), and decorations and certain objects found in the bedroom (6). The words were obtained through fieldwork in the Hreljići area, and attestations of these words were sought in dictionaries of the speech of the north Adriatic (the Boljun, Grobnik, Labin, Medulin and Roveria dialects) as well as the south Adriatic, primarily island, regions (Ugljan, Pag, Brač, Hvar). On the basis of the analysis of all the words obtained, it may be concluded that only two words from the questionnaire are of Slavic origin, postelja and punjava, and that, according to the immediate etymology, twenty words are of Istro-Venetian origin, i.e. from the Istrian variants of the Venetian dialect, which has been spoken in the region of Istria for centuries. This idiom is still spoken by many today, although it no longer serves as a lingua franca among the several ethnic and language groups living in the area as it once did: nowadays its role has been taken over by the standard Croatian language. By comparing the words obtained from Hreljići with those from other Čakavian dialects in Istria (the Medulin, Labin, Boljun and Roverian dialects, and Grobnik), as well as those from the southern Adriatic islands (Novlja on the island of Pag, Kukljica on the island of Ugljan, Brač, and Pitava and Zavala on Hvar), we have concluded that many words are used and have been preserved in the same form and with the same meanings found in the dialect of Hreljići. In all the dictionaries we consulted, nine words and their variants corresponding to those in Hreljići have been attested: armar/armarun/ormarun, lampadina/lampa, koltrina, šusta/šušta, kučeta/kočeta, štramac, lancun, kušin, intima/intimela. Two attested words of Venetian origin have only been found in certain Istrian idioms:

  9. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian eHerff

    2015-06-01

    Full Text Available It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
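    The word and phone error rates reported in this abstract are the standard ASR metrics: the Levenshtein (edit) distance between the decoded sequence and the reference transcript, divided by the reference length. A minimal Python sketch of the word-level version (not the authors' code; the function name and example sentences are illustrative):

    ```python
    def word_error_rate(reference, hypothesis):
        """WER = (substitutions + insertions + deletions) / reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i          # delete all remaining reference words
        for j in range(len(hyp) + 1):
            dp[0][j] = j          # insert all remaining hypothesis words
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
        return dp[len(ref)][len(hyp)] / len(ref)

    print(word_error_rate("the cat sat", "the cat sat"))  # 0.0
    ```

    A phone error rate is computed the same way over phone sequences instead of word sequences; the paper's 25% WER means one word in four was substituted, inserted, or deleted relative to the reference.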

  10. Word order and information structure in Makhuwa-Enahara

    NARCIS (Netherlands)

    Wal, Guenever Johanna van der

    2009-01-01

    This thesis investigates the grammar of Makhuwa-Enahara, a Bantu language spoken in the north of Mozambique. The information structure is an influential factor in this language, determining the word order and the use of special conjugations known as conjoint and disjoint verb forms. The thesis

  11. Word order variation and foregrounding of complement clauses

    DEFF Research Database (Denmark)

    Christensen, Tanya Karoli; Jensen, Torben Juel

    2015-01-01

    Through mixed models analyses of complement clauses in a corpus of spoken Danish we examine the role of sentence adverbials in relation to a word order distinction in Scandinavian signalled by the relative position of sentence adverbials and finite verb (V>Adv vs. Adv>V). The type of sentence...

  12. Evaluating spoken dialogue systems according to de-facto standards: A case study

    NARCIS (Netherlands)

    Möller, S.; Smeele, P.; Boland, H.; Krebber, J.

    2007-01-01

    In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During

  13. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  14. Using the TED Talks to Evaluate Spoken Post-editing of Machine Translation

    DEFF Research Database (Denmark)

    Liyanapathirana, Jeevanthi; Popescu-Belis, Andrei

    2016-01-01

    This paper presents a solution to evaluate spoken post-editing of imperfect machine translation output by a human translator. We compare two approaches to the combination of machine translation (MT) and automatic speech recognition (ASR): a heuristic algorithm and a machine learning method...

  15. Selectivity of lexical-semantic disorders in Polish-speaking patients with aphasia: evidence from single-word comprehension.

    Science.gov (United States)

    Jodzio, Krzysztof; Biechowska, Daria; Leszniewska-Jodzio, Barbara

    2008-09-01

    Several neuropsychological studies have shown that patients with brain damage may demonstrate selective category-specific deficits of auditory comprehension. The present paper reports on an investigation of aphasic patients' preserved ability to perform a semantic task on spoken words despite severe impairment in auditory comprehension, as shown by failure in matching spoken words to pictured objects. Twenty-six aphasic patients (11 women and 15 men) with impaired speech comprehension due to a left-hemisphere ischaemic stroke were examined; all were right-handed and native speakers of Polish. Six narrowly defined semantic categories for which dissociations have been reported are colors, body parts, animals, food, objects (mostly tools), and means of transportation. An analysis using one-way ANOVA with repeated measures in conjunction with the Wilks' Lambda test revealed significant discrepancies among these categories in aphasic patients, who had much more difficulty comprehending names of colors than names of other objects (F(5,21) = 13.15); an explanation in terms of word frequency and/or visual complexity was ruled out. Evidence from the present study supports the position that the so-called "global" aphasia is an imprecise term and should be redefined. These results are discussed within the connectionist and modular perspectives on category-specific deficits in aphasia.

  16. Word-length effect in verbal short-term memory in individuals with Down's syndrome.

    Science.gov (United States)

    Kanno, K; Ikeda, Y

    2002-11-01

    Many studies have indicated that individuals with Down's syndrome (DS) show a specific deficit in short-term memory for verbal information. The aim of the present study was to investigate the influence of word length on verbal short-term memory in individuals with DS. Twenty-eight children with DS and 10 control participants matched for memory span were tested on verbal serial recall and speech rate, which are thought to involve rehearsal and output speed. Although a significant word-length effect was observed in both groups, with more items of shorter spoken duration recalled than items of longer spoken duration, the number of correct recalls in the group with DS was reduced compared to the control subjects. The poor short-term memory demonstrated by children with DS was unrelated to speech rate. In addition, the proportion of repetition-gained errors in serial recall was higher in children with DS than in control subjects. The present findings suggest that poor access to long-term lexical knowledge, rather than overt articulation speed, constrains verbal short-term memory functions in individuals with DS.

  17. Speech Perception Engages a General Timer: Evidence from a Divided Attention Word Identification Task

    Science.gov (United States)

    Casini, Laurence; Burle, Boris; Nguyen, Noel

    2009-01-01

    Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…

  18. The visual-auditory color-word Stroop asymmetry and its time course

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2005-01-01

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect.

  19. Combinatorics on words Christoffel words and repetitions in words

    CERN Document Server

    Berstel, Jean; Reutenauer, Christophe; Saliola, Franco V

    2008-01-01

    The two parts of this text are based on two series of lectures delivered by Jean Berstel and Christophe Reutenauer in March 2007 at the Centre de Recherches Mathématiques, Montréal, Canada. Part I represents the first modern and comprehensive exposition of the theory of Christoffel words. Part II presents numerous combinatorial and algorithmic aspects of repetition-free words stemming from the work of Axel Thue, a pioneer in the theory of combinatorics on words. A beginner to the theory of combinatorics on words will be motivated by the numerous examples, and the large variety of exercises, which make the book unique at this level of exposition. The clean and streamlined exposition and the extensive bibliography will also be appreciated. After reading this book, beginners should be ready to read modern research papers in this rapidly growing field and contribute their own research to its development. Experienced readers will be interested in the finitary approach to Sturmian words that Christoffel words offe...

  20. Repeated imitation makes human vocalizations more word-like.

    Science.gov (United States)

    Edmiston, Pierce; Perlman, Marcus; Lupyan, Gary

    2018-03-14

    People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words. © 2018 The Author(s).

  1. How do teaser advertisements boost word of mouth about new products? For consumers, the future is more exciting than the present

    NARCIS (Netherlands)

    Thorbjornsen, H.; Ketelaar, P.E.; Riet, J.P. van 't; Dahlén, M.

    2015-01-01

    Future-framed marketing is highly effective in generating positive product-related word of mouth (WOM) for new products. This was demonstrated in two studies: Study 1 reported a novel online field experiment on WOM behavior; Study 2 tested the proposed WOM effects in a more controlled laboratory

  2. Word form Encoding in Chinese Word Naming and Word Typing

    Science.gov (United States)

    Chen, Jenn-Yeu; Li, Cheng-Yi

    2011-01-01

    The process of word form encoding was investigated in primed word naming and word typing with Chinese monosyllabic words. The target words shared or did not share the onset consonants with the prime words. The stimulus onset asynchrony (SOA) was 100 ms or 300 ms. Typing required the participants to enter the phonetic letters of the target word,…

  3. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    Science.gov (United States)

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  4. Measuring Syntactic Complexity in Spontaneous Spoken Swedish

    Science.gov (United States)

    Roll, Mikael; Frid, Johan; Horne, Merle

    2007-01-01

    Hesitation disfluencies after phonetically prominent stranded function words are thought to reflect the cognitive coding of complex structures. Speech fragments following the Swedish function word "att" "that" were analyzed syntactically, and divided into two groups: one with "att" in disfluent contexts, and the other with "att" in fluent…

  5. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  6. Word classes

    DEFF Research Database (Denmark)

    Rijkhoff, Jan

    2007-01-01

    in grammatical descriptions of some 50 languages, which together constitute a representative sample of the world’s languages (Hengeveld et al. 2004: 529). It appears that there are both quantitative and qualitative differences between word class systems of individual languages. Whereas some languages employ a parts-of-speech system that includes the categories Verb, Noun, Adjective and Adverb, other languages may use only a subset of these four lexical categories. Furthermore, quite a few languages have a major word class whose members cannot be classified in terms of the categories Verb – Noun – Adjective – Adverb, because they have properties that are strongly associated with at least two of these four traditional word classes (e.g. Adjective and Adverb). Finally, this article discusses some of the ways in which word class distinctions interact with other grammatical domains, such as syntax and morphology.

  7. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    Full Text Available A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, the specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
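    The final EER figures come from fusing the phonotactic and acoustic subsystems at the score level. A common way to do this, sketched below under assumed score values (the language set, scores, and fusion weight are illustrative, not those of the paper), is a weighted linear combination of per-language scores followed by an argmax decision:

    ```python
    def fuse_scores(phonotactic, acoustic, w=0.5):
        """Weighted linear fusion of per-language scores from two subsystems.

        w is the weight on the phonotactic subsystem; in practice such weights
        are tuned (e.g. by logistic regression) on a held-out development set.
        """
        return {lang: w * phonotactic[lang] + (1 - w) * acoustic[lang]
                for lang in phonotactic}

    # Hypothetical per-language log-likelihood-style scores for one utterance.
    phono = {"en": -1.0, "zh": -0.4, "es": -2.0}
    acoust = {"en": -0.9, "zh": -1.1, "es": -0.3}

    fused = fuse_scores(phono, acoust, w=0.6)
    best = max(fused, key=fused.get)  # language decision: highest fused score
    ```

    The EER itself is then measured by sweeping a threshold over such fused scores and finding the operating point where the false-acceptance and false-rejection rates are equal.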

  8. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. 
    When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to…

  9. Young toddlers' word comprehension is flexible and efficient.

    Directory of Open Access Journals (Sweden)

    Elika Bergelson

    Full Text Available Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children's gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers' spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice, which could be called "cup" or "juice") and pictures that had only one likely name for toddlers (such as "apple"), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age, 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.

  10. Developmental changes in memorial comparisons: the effects of stimulus presentation mode.

    Science.gov (United States)

    Wright, K P; Berch, D B

    1992-06-01

    First graders, fifth graders, and college students made comparative size judgments of either pictures (line drawings) or names (spoken words) of common objects by designating the "bigger" item in real life. Care was taken to equate the picture and word conditions on a number of critical parameters including method of item-pair presentation and activation of response-time intervals. All groups exhibited a symbolic distance effect. While judgments were faster with pictures than words, the magnitude of the difference did not change with age. Previous research suggesting a marked developmental decline in the magnitude of the "pictorial superiority effect" may have confounded reduced memory demands with stimulus presentation mode for young children. Finally, slopes of the symbolic distance functions were found to decrease with increasing grade level, at least from first to fifth grade. This is the first demonstration of an age-related decline in slopes for magnitude comparisons of concrete objects.

  11. Benefits of augmentative signs in word learning: Evidence from children who are deaf/hard of hearing and children with specific language impairment.

    Science.gov (United States)

    van Berkel-van Hoof, Lian; Hermans, Daan; Knoors, Harry; Verhoeven, Ludo

    2016-12-01

    Augmentative signs may facilitate word learning in children with vocabulary difficulties, for example, children who are Deaf/Hard of Hearing (DHH) and children with Specific Language Impairment (SLI). Although augmentative signs have been claimed to aid second language learning in populations with typical language development, empirical evidence in favor of this claim is lacking. We aim to investigate whether augmentative signs facilitate word learning for DHH children, children with SLI, and typically developing (TD) children. Whereas previous studies taught children new labels for familiar objects, the present study taught new labels for new objects. In our word learning experiment children were presented with pictures of imaginary creatures and pseudo words. Half of the words were accompanied by an augmentative pseudo sign. The children were tested for their receptive word knowledge. The DHH children benefitted significantly from augmentative signs, but the children with SLI and TD age-matched peers did not score significantly different on words from either the sign or no-sign condition. These results suggest that using Sign-Supported speech in classrooms of bimodal bilingual DHH children may support their spoken language development. The difference between earlier research findings and the present results may be caused by a difference in methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Periodic words connected with the Fibonacci words

    Directory of Open Access Journals (Sweden)

    G. M. Barabash

    2016-06-01

    Full Text Available In this paper we introduce two families of periodic words (FLP-words of type 1 and FLP-words of type 2) that are connected with the Fibonacci words, and investigate their properties.

  13. Learning words

    DEFF Research Database (Denmark)

    Jaswal, Vikram K.; Hansen, Mikkel

    2006-01-01

    Children tend to infer that when a speaker uses a new label, the label refers to an unlabeled object rather than one they already know the label for. Does this inference reflect a default assumption that words are mutually exclusive? Or does it instead reflect the result of a pragmatic reasoning process about what the speaker intended? In two studies, we distinguish between these possibilities. Preschoolers watched as a speaker pointed toward (Study 1) or looked at (Study 2) a familiar object while requesting the referent for a new word (e.g. 'Can you give me the blicket?'). In both studies, despite the speaker's unambiguous behavioral cue indicating an intent to refer to a familiar object, children inferred that the novel label referred to an unfamiliar object. These results suggest that children expect words to be mutually exclusive even when a speaker provides some kinds of pragmatic…

  14. Effects of Word Frequency and Transitional Probability on Word Reading Durations of Younger and Older Speakers.

    Science.gov (United States)

    Moers, Cornelia; Meyer, Antje; Janse, Esther

    2017-06-01

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups (younger children, 8-12 years; adolescents, 12-18 years; and older Dutch speakers, 62-95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
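The TP measure described above is simply a conditional bigram probability. A minimal sketch of how forward TP (probability of a word given its left neighbour) and backward TP (given its right neighbour) could be computed from a token sequence; the toy corpus and the simplified handling of sequence edges are assumptions, not the paper's corpus procedure:

```python
from collections import Counter

def transitional_probabilities(tokens):
    """Forward TP P(w2 | w1) and backward TP P(w1 | w2) from bigram counts.

    Simplification: unigram counts include sequence-initial/final uses,
    which a production corpus study would handle more carefully.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    forward = {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}
    backward = {(w1, w2): c / unigrams[w2] for (w1, w2), c in bigrams.items()}
    return forward, backward

corpus = "the cat sat on the mat the cat ran".split()
fwd, bwd = transitional_probabilities(corpus)
# "cat" follows "the" in 2 of the 3 occurrences of "the",
# and "cat" is always preceded by "the".
```

High forward TP for a word in context would then predict a shorter spoken duration for that word, per the probabilistic-reduction account.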

  15. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet and present some properties that characterizes words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...

  16. Domain-specific and domain-general constraints on word and sequence learning.

    Science.gov (United States)

    Archibald, Lisa M D; Joanisse, Marc F

    2013-02-01

    The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word-picture and nonword-picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.

  17. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  18. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    … In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military…

  19. Sonority and early words

    DEFF Research Database (Denmark)

    Kjærbæk, Laila; Boeg Thomsen, Ditte; Lambertsen, Claus

    2015-01-01

    Syllables play an important role in children’s early language acquisition, and children appear to rely on clear syllabic structures as a key to word acquisition (Vihman 1996; Oller 2000). However, not all languages present children with equally clear cues to syllabic structure, and since the spec… acquisition therefore presents us with the opportunity to examine how children respond to the task of word learning when the input language offers less clear cues to syllabic structure than usually seen. To investigate the sound structure in Danish children’s lexical development, we need a model of syllable…-29 months. For the two children, the phonetic structure of the first ten words to occur is compared with that of the last ten words to occur before 30 months of age, and with that of ten words in between. Measures related to the sonority envelope, viz. sonority types and in particular sonority rises…

  20. English Loan Words in the Speech of Six-Year-Old Navajo Children, with Supplement-Concordance.

    Science.gov (United States)

    Holm, Agnes; And Others

    As part of a study of the feasibility and effect of teaching Navajo children to read their own language first, preliminary data on English loan words in the speech of 6-year-old Navajos were gathered in this study of the language of over 200 children. Taped interviews with these children were analyzed, and a spoken word count of all English words…

  1. Does Set for Variability Mediate the Influence of Vocabulary Knowledge on the Development of Word Recognition Skills?

    Science.gov (United States)

    Tunmer, William E.; Chapman, James W.

    2012-01-01

    This study investigated the hypothesis that vocabulary influences word recognition skills indirectly through "set for variability", the ability to determine the correct pronunciation of approximations to spoken English words. One hundred forty children participating in a 3-year longitudinal study were administered reading and…

  2. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  3. Recurrent Word Combinations in EAP Test-Taker Writing: Differences between High- and Low-Proficiency Levels

    Science.gov (United States)

    Appel, Randy; Wood, David

    2016-01-01

    The correct use of frequently occurring word combinations represents an important part of language proficiency in spoken and written discourse. This study investigates the use of English-language recurrent word combinations in low-level and high-level L2 English academic essays sourced from the Canadian Academic English Language (CAEL) assessment.…

  4. A Spoken English Recognition Expert System.

    Science.gov (United States)

    1983-09-01

    "Speech Recognition by Computer," Scientific American. New York: Scientific American, April 1981: 64-76. 16. Marcus, Mitchell P. A Theory of Syntactic… prob)…) Possible words for voice decoder to choose from are: gents dishes issues itches ewes folks foes communications units eunuchs error * farce

  5. Does "Word Coach" Coach Words?

    Science.gov (United States)

    Cobb, Tom; Horst, Marlise

    2011-01-01

    This study reports on the design and testing of an integrated suite of vocabulary training games for Nintendo[TM] collectively designated "My Word Coach" (Ubisoft, 2008). The games' design is based on a wide range of learning research, from classic studies on recycling patterns to frequency studies of modern corpora. Its general usage…

  6. Diminutives facilitate word segmentation in natural speech: cross-linguistic evidence.

    Science.gov (United States)

    Kempe, Vera; Brooks, Patricia J; Gillis, Steven; Samson, Graham

    2007-06-01

    Final-syllable invariance is characteristic of diminutives (e.g., doggie), which are a pervasive feature of the child-directed speech registers of many languages. Invariance in word endings has been shown to facilitate word segmentation (Kempe, Brooks, & Gillis, 2005) in an incidental-learning paradigm in which synthesized Dutch pseudonouns were used. To broaden the cross-linguistic evidence for this invariance effect and to increase its ecological validity, adult English speakers (n=276) were exposed to naturally spoken Dutch or Russian pseudonouns presented in sentence contexts. A forced choice test was given to assess target recognition, with foils comprising unfamiliar syllable combinations in Experiments 1 and 2 and syllable combinations straddling word boundaries in Experiment 3. A control group (n=210) received the recognition test with no prior exposure to targets. Recognition performance improved with increasing final-syllable rhyme invariance, with larger increases for the experimental group. This confirms that word ending invariance is a valid segmentation cue in artificial, as well as naturalistic, speech and that diminutives may aid segmentation in a number of languages.

  7. Word wheels

    CERN Document Server

    Clark, Kathryn

    2013-01-01

    Targeting the specific problems learners have with language structure, these multi-sensory exercises appeal to all age groups including adults. Exercises use sight, sound and touch and are also suitable for English as an Additional Language and Basic Skills students. Word Wheels includes off-the-shelf resources including lesson plans and photocopiable worksheets, an interactive CD with practice exercises, and support material for the busy teacher or non-specialist staff, as well as homework activities.

  8. Psycholinguistic norms for action photographs in French and their relationships with spoken and written latencies.

    Science.gov (United States)

    Bonin, Patrick; Boyer, Bruno; Méot, Alain; Fayol, Michel; Droit, Sylvie

    2004-02-01

    A set of 142 photographs of actions (taken from Fiez & Tranel, 1997) was standardized in French on name agreement, image agreement, conceptual familiarity, visual complexity, imageability, age of acquisition, and duration of the depicted actions. Objective word frequency measures were provided for the infinitive modal forms of the verbs and for the cumulative frequency of the verbal forms associated with the photographs. Statistics on the variables collected for action items were provided and compared with the statistics on the same variables collected for object items. The relationships between these variables were analyzed, and certain comparisons between the current database and other similar published databases of pictures of actions are reported. Spoken and written naming latencies were also collected for the photographs of actions, and multiple regression analyses revealed that name agreement, image agreement, and age of acquisition are the major determinants of action naming speed. Finally, certain analyses were performed to compare object and action naming times. The norms and the spoken and written naming latencies corresponding to the pictures are available on the Internet (http://www.psy.univ-bpclermont.fr/~pbonin/pbonin-eng.html) and should be of great use to researchers interested in the processing of actions.

  9. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    Science.gov (United States)

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and +30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some…
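As background, the core operation of any steerable beamformer of this kind is to delay and sum the microphone signals so that sound arriving from the steered direction adds coherently while diffuse noise averages down. The following is a crude far-field, integer-sample-delay sketch in numpy, not the VGHA's actual processing; the array geometry, sampling rate, and signals are invented for illustration:

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def delay_and_sum(signals, mic_x, angle_deg, fs):
    """Steer a linear microphone array toward angle_deg (0 = broadside).

    signals is (n_mics, n_samples); mic_x holds microphone positions
    along the array axis in metres. Far-field plane-wave model with
    integer-sample delays: a rough sketch, not the VGHA beamformer.
    """
    delays = mic_x * np.sin(np.deg2rad(angle_deg)) / C   # seconds per mic
    shifts = np.round(delays * fs).astype(int)
    shifts -= shifts.min()                               # non-negative shifts
    out = np.zeros(signals.shape[1])
    for sig, s in zip(signals, shifts):
        out += np.roll(sig, -s)                          # advance later arrivals
    return out / len(signals)

# A target from the steered direction adds coherently; independent noise
# at each microphone does not.
fs = 16000
mic_x = np.arange(4) * 0.04            # 4 mics, 4 cm spacing (invented)
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 500 * t)   # 500 Hz tone standing in for speech
rng = np.random.default_rng(1)
delays = mic_x * np.sin(np.deg2rad(30)) / C
sigs = np.stack([np.roll(target, int(round(d * fs)))
                 + 0.5 * rng.standard_normal(fs) for d in delays])
enhanced = delay_and_sum(sigs, mic_x, 30, fs)
```

In the VGHA the steering angle would be updated from eye-gaze measurements rather than fixed, which is exactly what the dynamic (location-switching) condition in the study stresses.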

  10. Learning a Practice through Practise: Presenting Knowledge in Doctoral Spoken Presentations

    Science.gov (United States)

    Manidis, Marie; Addo, Rebecca

    2017-01-01

    Learning to "become doctor" requires PhD candidates to undertake progressive public displays--material and social--of knowledge. Knowledge in doctoral pedagogy is primarily realised textually, with speaking and writing remaining as the primary assessment rubrics of progress and of the qualification. Participating textually begins, in a…

  11. Emotion Words Affect Eye Fixations during Reading

    Science.gov (United States)

    Scott, Graham G.; O'Donnell, Patrick J.; Sereno, Sara C.

    2012-01-01

    Emotion words are generally characterized as possessing high arousal and extreme valence and have typically been investigated in paradigms in which they are presented and measured as single words. This study examined whether a word's emotional qualities influenced the time spent viewing that word in the context of normal reading. Eye movements…

  12. Word Domain Disambiguation via Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.

    2006-06-04

    Word subject domains have been widely used to improve the performance of word sense disambiguation algorithms. However, comparatively little effort has been devoted so far to the disambiguation of word subject domains. The few existing approaches have focused on the development of algorithms specific to word domain disambiguation. In this paper we explore an alternative approach where word domain disambiguation is achieved via word sense disambiguation. Our study shows that this approach yields very strong results, suggesting that word domain disambiguation can be addressed in terms of word sense disambiguation with no need for special purpose algorithms.
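The approach can be illustrated with a toy sketch: first disambiguate the word's sense (here with a bare-bones Lesk-style gloss overlap, standing in for whatever WSD algorithm is actually used), then read the domain off the chosen sense. The sense inventory, glosses, and domain labels below are invented for illustration, not taken from the paper:

```python
# Toy sense inventory: each sense carries a gloss and a subject domain.
# Words, glosses, and domain labels are illustrative only.
SENSES = {
    "bank": [
        {"gloss": "financial institution that accepts deposits", "domain": "economy"},
        {"gloss": "sloping land beside a river or lake", "domain": "geography"},
    ],
}

def lesk_sense(word, context):
    """Pick the sense whose gloss overlaps most with the context words
    (a simplified Lesk-style WSD step)."""
    ctx = set(context)
    return max(SENSES[word],
               key=lambda sense: len(ctx & set(sense["gloss"].split())))

def word_domain(word, context):
    """Domain disambiguation via sense disambiguation:
    disambiguate the sense, then map it to its subject domain."""
    return lesk_sense(word, context)["domain"]

print(word_domain("bank", "they fished from the river bank".split()))
```

The point the paper makes is structural: once senses are annotated with domains, any sense disambiguator doubles as a domain disambiguator, so no domain-specific algorithm is needed.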

  13. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...

  14. Retinoic acid signaling: a new piece in the spoken language puzzle

    Directory of Open Access Journals (Sweden)

    Jon-Ruben eVan Rhijn

    2015-11-01

    Full Text Available Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech-motor output. Understanding the neuro-genetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuro-molecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular and behavioral levels that suggest an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output. We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language-ready brain.

  15. Electronic Control System Of Home Appliances Using Speech Command Words

    Directory of Open Access Journals (Sweden)

    Aye Min Soe

    2015-06-01

    Full Text Available The main idea of this paper is to develop a speech recognition system. By using this system, smart home appliances are controlled by spoken words. The spoken words chosen for recognition are Fan On, Fan Off, Light On, Light Off, TV On and TV Off. The input of the system takes speech signals to control home appliances. The proposed system has two main parts: speech recognition and the electronic control system for smart home appliances. Speech recognition is implemented in the MATLAB environment and contains two main modules: feature extraction and feature matching. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction. A Vector Quantization (VQ) approach using a clustering algorithm is applied for feature matching. In the electronic control system for home appliances, an RF module is used to carry the command signal from the PC to the microcontroller wirelessly. The microcontroller is connected to a driver circuit for the relay and motor. The input commands are recognized very well, and the system performs well in controlling home appliances by spoken words.
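The feature-matching stage described above can be sketched as follows: a VQ codebook is trained per command word with k-means clustering, and an incoming utterance is recognized by the codebook that yields the lowest average distortion. This sketch uses synthetic 12-dimensional vectors in place of real MFCC frames (the paper's front end runs in MATLAB), and all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_codebook(features, k=4, iters=20):
    """K-means codebook over a command's training frames (VQ training)."""
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # skip empty clusters
                centers[j] = features[labels == j].mean(axis=0)
    return centers

def distortion(features, codebook):
    """Average distance from each frame to its nearest codeword."""
    d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    return d.min(axis=1).mean()

def recognize(features, codebooks):
    """Pick the command whose codebook gives the lowest distortion."""
    return min(codebooks, key=lambda cmd: distortion(features, codebooks[cmd]))

# Synthetic 12-dim "MFCC-like" frames for two commands; real MFCCs would
# come from the feature-extraction front end described in the paper.
fan_on = rng.standard_normal((80, 12)) + 2.0
light_on = rng.standard_normal((80, 12)) - 2.0
books = {"fan on": train_codebook(fan_on), "light on": train_codebook(light_on)}
test_utt = rng.standard_normal((30, 12)) + 2.0   # a new "fan on" utterance
print(recognize(test_utt, books))
```

The recognized command string would then be encoded and sent over the RF link to the microcontroller driving the relays.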

  16. Grounding word learning in space.

    Directory of Open Access Journals (Sweden)

    Larissa K Samuelson

    Full Text Available Humans and objects, and thus social interactions about objects, exist within space. Words direct listeners' attention to specific regions of space. Thus, a strong correspondence exists between where one looks, one's bodily orientation, and what one sees. This leads to further correspondence with what one remembers. Here, we present data suggesting that children use associations between space and objects and space and words to link words and objects--space binds labels to their referents. We tested this claim in four experiments, showing that the spatial consistency of where objects are presented affects children's word learning. Next, we demonstrate that a process model that grounds word learning in the known neural dynamics of spatial attention, spatial memory, and associative learning can capture the suite of results reported here. This model also predicts that space is special, a prediction supported in a fifth experiment that shows children do not use color as a cue to bind words and objects. In a final experiment, we ask whether spatial consistency affects word learning in naturalistic word learning contexts. Children of parents who spontaneously keep objects in a consistent spatial location during naming interactions learn words more effectively. Together, the model and data show that space is a powerful tool that can effectively ground word learning in social contexts.

  17. The effects of sad prosody on hemispheric specialization for words processing.

    Science.gov (United States)

    Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat

    2015-06-01

    This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. The word concreteness effect occurs for positive, but not negative, emotion words in immediate serial recall.

    Science.gov (United States)

    Tse, Chi-Shing; Altarriba, Jeanette

    2009-02-01

    The present study examined the roles of word concreteness and word valence in the immediate serial recall task. Emotion words (e.g. happy) were used to investigate these effects. Participants completed study-test trials with seven-item study lists consisting of positive or negative words with either high or low concreteness (Experiments 1 and 2) and neutral (i.e. non-emotion) words with either high or low concreteness (Experiment 2). For neutral words, the typical word concreteness effect (concrete words are better recalled than abstract words) was replicated. For emotion words, the effect occurred for positive words, but not for negative words. While the word concreteness effect was stronger for neutral words than for negative words, it was not different for the neutral words and the positive words. We conclude that both word valence and word concreteness simultaneously contribute to the item and order retention of emotion words and discuss how Hulme et al.'s (1997) item redintegration account can be modified to explain these findings.

  19. Don't words come easy? A psychophysical exploration of word superiority

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe Allerup

    2013-01-01

Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, … and visual short term memory capacity. So, even if single words come easy, there is a limit to the word superiority effect…

  20. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  1. The determinants of spoken and written picture naming latencies.

    Science.gov (United States)

    Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel

    2002-02-01

    The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.

  2. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2013-08-01

Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed, strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention to spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, while the subjects' attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  3. Cognitive, Linguistic and Print-Related Predictors of Preschool Children's Word Spelling and Name Writing

    Science.gov (United States)

    Milburn, Trelani F.; Hipfner-Boucher, Kathleen; Weitzman, Elaine; Greenberg, Janice; Pelletier, Janette; Girolametto, Luigi

    2017-01-01

    Preschool children begin to represent spoken language in print long before receiving formal instruction in spelling and writing. The current study sought to identify the component skills that contribute to preschool children's ability to begin to spell words and write their name. Ninety-five preschool children (mean age = 57 months) completed a…

  4. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use over Time

    Science.gov (United States)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    2013-01-01

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson--Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semi-fixed multi-word units (MWUs), which comprise fixed parts with the potential…

  5. Gated Word Recognition by Postlingually Deafened Adults with Cochlear Implants: Influence of Semantic Context

    Science.gov (United States)

    Patro, Chhayakanta; Mendel, Lisa Lucks

    2018-01-01

    Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. Method: Listeners with CIs as well as those with normal hearing (NH)…

  6. Age of Acquisition and Sensitivity to Gender in Spanish Word Recognition

    Science.gov (United States)

    Foote, Rebecca

    2014-01-01

    Speakers of gender-agreement languages use gender-marked elements of the noun phrase in spoken-word recognition: A congruent marking on a determiner or adjective facilitates the recognition of a subsequent noun, while an incongruent marking inhibits its recognition. However, while monolinguals and early language learners evidence this…

  7. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use Over Time

    NARCIS (Netherlands)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semifixed multi-word units (MWUs),

  8. Tracking Eye Movements to Localize Stroop Interference in Naming: Word Planning Versus Articulatory Buffering

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2014-01-01

    Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the

  9. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  10. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    Science.gov (United States)

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  11. Lexical Competition Effects in Aphasia: Deactivation of Lexical Candidates in Spoken Word Processing

    Science.gov (United States)

    Janse, Esther

    2006-01-01

    Research has shown that Broca's and Wernicke's aphasic patients show different impairments in auditory lexical processing. The results of an experiment with form-overlapping primes showed an inhibitory effect of form-overlap for control adults and a weak inhibition trend for Broca's aphasic patients, but a facilitatory effect of form-overlap was…

  12. Deviant ERP Response to Spoken Non-Words among Adolescents Exposed to Cocaine in Utero

    Science.gov (United States)

    Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.

    2012-01-01

    Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…

  13. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. The power of the spoken word : Political mobilization and nation-building by Kuyper and Gladstone

    NARCIS (Netherlands)

    Hoekstra, H

    2003-01-01

    This article addresses the question why in the Netherlands it was the orthodox protestants who were able to mobilize the masses and not the political establishments of liberals and enlightened protestants during the latter part of the nineteenth century. The biblical rhetoric of their leader Abraham

  15. Spoken Word Recognition in Children with Autism Spectrum Disorder: The Role of Visual Disengagement

    Science.gov (United States)

    Venker, Courtney E.

    2017-01-01

    Deficits in visual disengagement are one of the earliest emerging differences in infants who are later diagnosed with autism spectrum disorder. Although researchers have speculated that deficits in visual disengagement could have negative effects on the development of children with autism spectrum disorder, we do not know which skills are…

  16. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Acheson, D.J.; Takashima, A.

    2013-01-01

Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and

  17. Word Problem Wizardry.

    Science.gov (United States)

    Cassidy, Jack

    1991-01-01

    Presents suggestions for teaching math word problems to elementary students. The strategies take into consideration differences between reading in math and reading in other areas. A problem-prediction game and four self-checking activities are included along with a magic password challenge. (SM)

  18. Words Do Come Easy (Sometimes)

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe Allerup

…multiple stimuli are presented simultaneously: Are words treated as units or wholes in visual short term memory? Using methods based on a Theory of Visual Attention (TVA), we measured perceptual threshold, visual processing speed and visual short term memory capacity for words and letters, in two simple … a different pattern: Letters are perceived more easily than words, and this is reflected both in perceptual processing speed and short term memory capacity. So even if single words do come easy, they seem to enjoy no advantage in visual short term memory…

  19. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

Findings on the association of speaking different languages with cognitive functioning in old age are so far inconsistent and inconclusive. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests on verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed on the different languages they spoke on a regular basis, educational attainment, occupation, and engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as respective additional predictor, but not over and above educational attainment/cognitive level of job as respective additional predictor. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet, this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may depend on other types of cognitive stimulation that individuals also engaged in during their life course.

  20. Lexicon Optimization for Dutch Speech Recognition in Spoken Document Retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  1. Lexicon optimization for Dutch speech recognition in spoken document retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.; Dalsgaard, P.; Lindberg, B.; Benner, H.

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  2. Oral and Literate Strategies in Spoken and Written Narratives.

    Science.gov (United States)

    Tannen, Deborah

    1982-01-01

    Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…

  3. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    Science.gov (United States)

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  4. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

Apr 12, 2018 … languages and can be used for the purposes of spoken language identification. Keywords: SLID … branch of linguistics to study the sound structure of human language … countries, work in the area of Indian language identification has not … English and speech database has been collected over tele…

  5. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  6. A memory-based shallow parser for spoken Dutch

    NARCIS (Netherlands)

    Canisius, S.V.M.; van den Bosch, A.; Decadt, B.; Hoste, V.; De Pauw, G.

    2004-01-01

    We describe the development of a Dutch memory-based shallow parser. The availability of large treebanks for Dutch, such as the one provided by the Spoken Dutch Corpus, allows memory-based learners to be trained on examples of shallow parsing taken from the treebank, and act as a shallow parser after

  7. The Malay Lexicon Project: a database of lexical statistics for 9,592 words.

    Science.gov (United States)

    Yap, Melvin J; Liow, Susan J Rickard; Jalil, Sajlia Binte; Faizal, Siti Syuhada Binte

    2010-11-01

    Malay, a language spoken by 250 million people, has a shallow alphabetic orthography, simple syllable structures, and transparent affixation--characteristics that contrast sharply with those of English. In the present article, we first compare the letter-phoneme and letter-syllable ratios for a sample of alphabetic orthographies to highlight the importance of separating language-specific from language-universal reading processes. Then, in order to develop a better understanding of word recognition in orthographies with more consistent mappings to phonology than English, we compiled a database of lexical variables (letter length, syllable length, phoneme length, morpheme length, word frequency, orthographic and phonological neighborhood sizes, and orthographic and phonological Levenshtein distances) for 9,592 Malay words. Separate hierarchical regression analyses for Malay and English revealed how the consistency of orthography-phonology mappings selectively modulates the effects of different lexical variables on lexical decision and speeded pronunciation performance. The database of lexical and behavioral measures for Malay is available at http://brm.psychonomic-journals.org/content/supplemental.
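Among the lexical variables listed above are orthographic Levenshtein distances. As an illustration only (not the authors' analysis code), the standard dynamic-programming edit distance, and an OLD20-style summary (mean distance to a word's 20 closest orthographic neighbours), can be sketched as follows; the function names and the tiny lexicon in the usage comment are hypothetical:

```python
def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance
    # (unit cost for insertion, deletion, and substitution).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def old20(word: str, lexicon: list[str]) -> float:
    # Mean Levenshtein distance to the (up to) 20 closest
    # other words in the lexicon (an OLD20-style measure).
    dists = sorted(levenshtein(word, w) for w in lexicon if w != word)
    return sum(dists[:20]) / min(20, len(dists))

# e.g. old20("cat", ["bat", "can", "dog"]) averages distances 1, 1, 3
```

Applied over a full word list such as the 9,592-item Malay database, `old20` yields one neighbourhood-density predictor per word for regression analyses of the kind described above.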

  8. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    Science.gov (United States)

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  9. Abelian primitive words

    OpenAIRE

    Domaratzki, Michael; Rampersad, Narad

    2011-01-01

    We investigate Abelian primitive words, which are words that are not Abelian powers. We show that unlike classical primitive words, the set of Abelian primitive words is not context-free. We can determine whether a word is Abelian primitive in linear time. Also different from classical primitive words, we find that a word may have more than one Abelian root. We also consider enumeration problems and the relation to the theory of codes. Peer reviewed
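The definition above can be made concrete with a short sketch: a word is an Abelian k-power if it splits into k consecutive blocks that are all anagrams of one another, and it is Abelian primitive if no such split exists for any k ≥ 2. This transparent check is quadratic in the worst case, not the linear-time algorithm the abstract refers to, and the function name is a hypothetical choice:

```python
from collections import Counter

def is_abelian_primitive(word: str) -> bool:
    """Return True if `word` is not an Abelian k-power for any k >= 2."""
    n = len(word)
    for k in range(2, n + 1):        # candidate number of blocks
        if n % k:
            continue                  # blocks must have equal length
        d = n // k                    # block length
        first = Counter(word[:d])
        # An Abelian power: every block is an anagram of the first.
        if all(Counter(word[i:i + d]) == first for i in range(d, n, d)):
            return False
    return True

# "abba" = (ab)(ba) is an Abelian square, hence not Abelian primitive;
# "aab" admits no such split, hence it is Abelian primitive.
```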

  10. Clusters of word properties as predictors of elementary school children's performance on two word tasks

    NARCIS (Netherlands)

    Tellings, A.E.J.M.; Coppens, K.M.; Gelissen, J.P.T.M.; Schreuder, R.

    2013-01-01

    Often, the classification of words does not go beyond "difficult" (i.e., infrequent, late-learned, nonimageable, etc.) or "easy" (i.e., frequent, early-learned, imageable, etc.) words. In the present study, we used a latent cluster analysis to divide 703 Dutch words with scores for eight word

  11. Can pictures speak a thousand words in understanding climate change?

    Science.gov (United States)

    Walton, P.

    2017-12-01

Pictures are able to engage, inspire and educate people in a way that the spoken or written word cannot, and with 21st Century technology we now have even more ways to present images. Researchers and campaigners working in climate change have used the power of images to great effect, bringing the issue of a warming planet into stark relief through iconic scenes such as the forlorn polar bear adrift on an iceberg. Whilst undeniably successful, this image has now become passé and invisible, making it necessary for the scientific community to identify new ways to engage and educate the general public. This paper reports on a new high resolution visualisation app that has been developed by the European Space Agency to illustrate the change over time of a number of climate variables. Data, collected via satellite Earth observations, have been rendered into visually stunning animations that can be interrogated in a number of ways to allow the user to understand the spatial and temporal changes of that variable. But is it enough? Can it ever be that all that glisters really is gold?

  12. Tone of voice guides word learning in informative referential contexts.

    Science.gov (United States)

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C

    2013-06-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., "daxen") spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

  13. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  14. Recurrent Partial Words

    Directory of Open Access Journals (Sweden)

    Francine Blanchet-Sadri

    2011-08-01

Partial words are sequences over a finite alphabet that may contain wildcard symbols, called holes, which match or are compatible with all letters; partial words without holes are said to be full words (or simply words). Given an infinite partial word w, the number of distinct full words over the alphabet that are compatible with factors of w of length n, called subwords of w, defines a measure of complexity of infinite partial words, the so-called subword complexity. This measure is of particular interest because we can construct partial words with subword complexities not achievable by full words. In this paper, we consider the notion of recurrence over infinite partial words, that is, we study whether all of the finite subwords of a given infinite partial word appear infinitely often, and we establish connections between subword complexity and recurrence in this more general framework.
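The notions of compatibility and subwords used in the abstract can be sketched directly; the choice of `?` as the hole symbol and the function names are illustrative, not from the paper, and the brute-force enumeration is only practical for short lengths and small alphabets:

```python
from itertools import product

HOLE = "?"  # wildcard symbol marking a hole (illustrative choice)

def compatible(u: str, v: str) -> bool:
    # Equal-length partial words are compatible if, at every position,
    # the letters agree or at least one of them is a hole.
    return len(u) == len(v) and all(
        a == b or a == HOLE or b == HOLE for a, b in zip(u, v))

def subwords(w: str, n: int, alphabet: str) -> set[str]:
    # Full words of length n compatible with some factor of w;
    # their count is the subword complexity of w at length n.
    factors = {w[i:i + n] for i in range(len(w) - n + 1)}
    return {"".join(p) for p in product(alphabet, repeat=n)
            if any(compatible("".join(p), f) for f in factors)}

# For w = "a?b" the length-2 factors are "a?" and "?b",
# which together are compatible with "aa", "ab", and "bb".
```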

  15. Spoken language outcomes after hemispherectomy: factoring in etiology.

    Science.gov (United States)

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  16. Exploring the word superiority effect using TVA

    DEFF Research Database (Denmark)

    Starrfelt, Randi

Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. It is unclear, however, if this is due to a lower threshold for perception of words, or a higher speed of processing for words than letters. We have investigated the WSE using methods based on a Theory of Visual Attention. In an experiment using single stimuli (words or letters) presented centrally, we show that the classical WSE is specifically reflected in perceptual … When multiple stimuli are presented simultaneously we find a different pattern: In a whole report experiment with six stimuli (letters or words), letters are perceived more easily than words, and this is reflected both in perceptual processing speed and short term memory capacity.

  17. A Word Count of Modern Arabic Prose.

    Science.gov (United States)

    Landau, Jacob M.

    This book presents a word count of Arabic prose based on 60 twentieth-century Egyptian books. The text is divided into an alphabetical list and a word frequency list. This word count is intended as an aid in the: (1) writing of primers and the compilation of graded readers, (2) examination of the vocabulary selection of primers and readers…

  18. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words.

    Science.gov (United States)

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

The present study investigated whether word width and word length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded 88.9% correct was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable for any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than shorter words and that a word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can vary with word width and word length in scrolling Japanese words.

  19. Spoken Cuzco Quechua, Units 1-6.

    Science.gov (United States)

    Sola, Donald F.; and others

    The materials in this volume comprise six units which present basic aspects of Cuzco Quechua phonology, morphology, and syntax for the beginning student. The six units are designed for approximately 120 hours of supervised class work with outside preparation expected of the student. Each unit consists of a dialogue to be memorized, a dialogue…

  20. Auditory Memory Distortion for Spoken Prose

    Science.gov (United States)

    Hutchison, Joanna L.; Hubbard, Timothy L.; Ferrandino, Blaise; Brigante, Ryan; Wright, Jamie M.; Rypma, Bart

    2012-01-01

    Observers often remember a scene as containing information that was not presented but that would have likely been located just beyond the observed boundaries of the scene. This effect is called "boundary extension" (BE; e.g., Intraub & Richardson, 1989). Previous studies have observed BE in memory for visual and haptic stimuli, and…

  1. Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex.

    Science.gov (United States)

    Noah, J Adam; Dravida, Swethasri; Zhang, Xian; Yahil, Shaul; Hirsch, Joy

    2017-01-01

    The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and in a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks, consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to the left…

  2. Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex.

    Directory of Open Access Journals (Sweden)

    J Adam Noah

    Full Text Available The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and in a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks, consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to…

  3. Early access to lexical-level phonological representations of Mandarin word-forms : evidence from auditory N1 habituation

    NARCIS (Netherlands)

    Yue, Jinxing; Alter, Kai; Howard, David; Bastiaanse, Roelien

    2017-01-01

    An auditory habituation design was used to investigate whether lexical-level phonological representations in the brain can be rapidly accessed after the onset of a spoken word. We studied the N1 component of the auditory event-related electrical potential, and measured the amplitude decrements of N1

  4. It's a Mad, Mad Wordle: For a New Take on Text, Try This Fun Word Cloud Generator

    Science.gov (United States)

    Foote, Carolyn

    2009-01-01

    Nation. New. Common. Generation. These are among the most frequently used words spoken by President Barack Obama in his January 2009 inauguration speech as seen in a fascinating visual display called a Wordle. Educators, too, can harness the power of Wordle to enhance learning. Imagine providing students with a whole new perspective on…

  5. Universal Lyndon Words

    OpenAIRE

    Carpi, Arturo; Fici, Gabriele; Holub, Stepan; Oprsal, Jakub; Sciortino, Marinella

    2014-01-01

    A word $w$ over an alphabet $\Sigma$ is a Lyndon word if there exists an order defined on $\Sigma$ for which $w$ is lexicographically smaller than all of its conjugates (other than itself). We introduce and study \emph{universal Lyndon words}, which are words over an $n$-letter alphabet that have length $n!$ and such that all the conjugates are Lyndon words. We show that universal Lyndon words exist for every $n$ and exhibit combinatorial and structural properties of these words. We then defi...
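
    The definition above is directly checkable: a word is Lyndon under a given letter ordering if it is strictly smaller than every other conjugate when letters are ranked by that ordering. The following sketch (mine, not from the paper) tests exactly that.

```python
# Check whether `word` is a Lyndon word under the letter ordering `order`,
# i.e., lexicographically smaller than all of its conjugates other than
# itself, following the definition quoted in the abstract.

def is_lyndon_under(word, order):
    rank = {ch: i for i, ch in enumerate(order)}
    key = lambda w: [rank[ch] for ch in w]
    conjugates = {word[i:] + word[:i] for i in range(1, len(word))}
    return all(key(word) < key(c) for c in conjugates if c != word)

print(is_lyndon_under("aab", "abc"))  # → True: "aab" < "aba" and "aab" < "baa"
print(is_lyndon_under("aba", "abc"))  # → False: conjugate "aab" is smaller
```

    A universal Lyndon word would pass such a check for every one of its conjugates, each under some suitable order.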

  6. A Few Words about Words | Poster

    Science.gov (United States)

    By Ken Michaels, Guest Writer. In Shakespeare’s play “Hamlet,” Polonius inquires of the prince, “What do you read, my lord?” Not at all pleased with what he’s reading, Hamlet replies, “Words, words, words.”1 I have previously described the communication model in which a sender encodes a message and then sends it via some channel (or medium) to a receiver, who decodes the message

  7. Effects of word width and word length on optimal character size for reading of horizontally scrolling Japanese words

    Directory of Open Access Journals (Sweden)

    Wataru eTeramoto

    2016-02-01

    Full Text Available The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of 4 Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, in which the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants’ performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable for any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than shorter words and the word width of 3.6° was optimal among the word lengths tested (3, 4, and 6 character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length.

  8. Brain activation during word identification and word recognition

    DEFF Research Database (Denmark)

    Jernigan, Terry L.; Ostergaard, Arne L.; Law, Ian

    1998-01-01

    Previous memory research has suggested that the effects of prior study observed in priming tasks are functionally, and neurobiologically, distinct phenomena from the kind of memory expressed in conventional (explicit) memory tests. Evidence for this position comes from observed dissociations between memory scores obtained with the two kinds of tasks. However, there is continuing controversy about the meaning of these dissociations. In recent studies, Ostergaard (1998a, Memory Cognit. 26:40-60; 1998b, J. Int. Neuropsychol. Soc., in press) showed that simply degrading visual word stimuli can dramatically alter the degree to which word priming shows a dissociation from word recognition; i.e., effects of a number of factors on priming paralleled their effects on recognition memory tests when the words were degraded at test. In the present study, cerebral blood flow changes were measured while…

  9. The Plausibility of Tonal Evolution in the Malay Dialect Spoken in Thailand: Evidence from an Acoustic Study

    Directory of Open Access Journals (Sweden)

    Phanintra Teeranon

    2007-12-01

    Full Text Available The F0 values of vowels following voiceless consonants are higher than those of vowels following voiced consonants; high vowels have a higher F0 than low vowels. It has also been found that when high vowels follow voiced consonants, the F0 values decrease. In contrast, low vowels following voiceless consonants show increasing F0 values. In other words, the voicing of initial consonants has been found to counterbalance the intrinsic F0 values of high and low vowels (House and Fairbanks 1953, Lehiste and Peterson 1961, Lehiste 1970, Laver 1994, Teeranon 2006). To test whether these three findings are applicable to a disyllabic language, the F0 values of high and low vowels following voiceless and voiced consonants were studied in a Malay dialect of the Austronesian language family spoken in Pathumthani Province, Thailand. The data was collected from three male informants, aged 30-35. The Praat program was used for acoustic analysis. The findings revealed the influence of the voicing of initial consonants on the F0 of vowels to be greater than the influence of vowel height. Evidence from this acoustic study shows that it is plausible for the Malay dialect spoken in Pathumthani to become a tonal language through the influence of initial consonants rather than through the high-low vowel dimension.

  10. Visual recognition of permuted words

    Science.gov (United States)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We present our study in the context of dual-route theories of reading, and it is observed that dual-route theory is consistent with our hypothesis of a distinction in the underlying cognitive behavior for reading permuted and non-permuted words. We conducted three experiments with lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and a t-test to determine significant differences in response time latencies for the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% for Urdu and by 11% for German. We also found a considerable difference in reading behavior between cursive and alphabetic languages, and it is observed that reading of Urdu is comparatively slower than reading of German due to the characteristics of its cursive script.
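
    The two-class comparison of response-time latencies reported above can be sketched with a Welch t statistic in pure Python. The latencies below are invented for illustration, not the study's data.

```python
# Welch's t statistic on response times for permuted vs. non-permuted
# words; a positive value means the first sample is slower on average.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

rt_normal = [520, 540, 510, 530, 525]    # ms, non-permuted words (made up)
rt_permuted = [610, 650, 640, 620, 655]  # ms, permuted words (made up)
print(welch_t(rt_permuted, rt_normal) > 0)  # → True: permuted words are slower
```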

  11. METONYMY BASED ON CULTURAL BACKGROUND KNOWLEDGE AND PRAGMATIC INFERENCING: EVIDENCE FROM SPOKEN DISCOURSE

    Directory of Open Access Journals (Sweden)

    Arijana Krišković

    2009-01-01

    Full Text Available The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies following the pragmatic typology of metonymic functions are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.

  12. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    Science.gov (United States)

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  13. The role of syllabic structure in French visual word recognition.

    Science.gov (United States)

    Rouibah, A; Taft, M

    2001-03-01

    Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.

  14. Why do participants initiate free recall of short lists of words with the first list item? Toward a general episodic memory explanation.

    Science.gov (United States)

    Spurgeon, Jessica; Ward, Geoff; Matthews, William J

    2014-11-01

    Participants who are presented with a short list of words for immediate free recall (IFR) show a strong tendency to initiate their recall with the 1st list item and then proceed in forward serial order. We report 2 experiments that examined whether this tendency was underpinned by a short-term memory store, of the type that is argued by some to underpin recency effects in IFR. In Experiment 1, we presented 3 groups of participants with lists of between 2 and 12 words for IFR, delayed free recall, and continuous-distractor free recall. The to-be-remembered words were simultaneously spoken and presented visually, and the distractor task involved silently solving a series of self-paced, visually presented mathematical equations (e.g., 3 + 2 + 4 = ?). The tendency to initiate recall at the start of short lists was greatest in IFR but was also present in the 2 other recall conditions. This finding was replicated in Experiment 2, where the to-be-remembered items were presented visually in silence and the participants spoke aloud their answers to computer-paced mathematical equations. Our results necessitate that a short-term buffer cannot be fully responsible for the tendency to initiate recall from the beginning of a short list; rather, they suggest that the tendency represents a general property of episodic memory that occurs across a range of time scales. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  15. Emotion words and categories: evidence from lexical decision

    OpenAIRE

    Scott, Graham; O'Donnell, Patrick; Sereno, Sara C.

    2014-01-01

    We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion–frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency nega...

  16. Cascading activation from lexical processing to letter-level processing in written word production.

    Science.gov (United States)

    Buchwald, Adam; Falconer, Carolyn

    2014-01-01

    Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.

  17. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  18. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
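
    The validity coefficients described above are correlations between summary English ratings and external criterion scores; a minimal Pearson correlation sketch follows. The paired scores are invented for illustration, not ECFMG data.

```python
# Pearson correlation between summary spoken-English ratings and an
# external criterion score (e.g., TOEFL); the numbers are hypothetical.
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

english_ratings = [3.1, 3.8, 2.5, 4.0, 3.3]   # made-up SP ratings
toefl_scores = [560, 600, 520, 640, 580]      # made-up criterion scores
print(round(pearson_r(english_ratings, toefl_scores), 2))  # → 0.98
```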

  19. On universal partial words

    OpenAIRE

    Chen, Herman Z. Q.; Kitaev, Sergey; Mütze, Torsten; Sun, Brian Y.

    2016-01-01

    A universal word for a finite alphabet $A$ and some integer $n\geq 1$ is a word over $A$ such that every word in $A^n$ appears exactly once as a subword (cyclically or linearly). It is well-known and easy to prove that universal words exist for any $A$ and $n$. In this work we initiate the systematic study of universal partial words. These are words that in addition to the letters from $A$ may contain an arbitrary number of occurrences of a special `joker' symbol $\Diamond$…
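
    The universality condition in the first sentence can be verified directly: every length-n word over the alphabet must occur exactly once as a (cyclic or linear) subword. The sketch below handles total words only; the paper's partial words with joker symbols generalize this.

```python
# Verify the defining property of a universal word: each word in A^n
# occurs exactly once as a subword, cyclically or linearly.
from itertools import product

def is_universal(word, alphabet, n, cyclic=True):
    s = word + word[:n - 1] if cyclic else word
    windows = [s[i:i + n] for i in range(len(s) - n + 1)]
    targets = [''.join(p) for p in product(alphabet, repeat=n)]
    return all(windows.count(t) == 1 for t in targets)

# "0011" is a cyclic universal (de Bruijn) word for n = 2 over {0, 1}:
print(is_universal("0011", "01", 2))  # → True
```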

  20. Word 2013 for dummies

    CERN Document Server

    Gookin, Dan

    2013-01-01

    This bestselling guide to Microsoft Word is the first and last word on Word 2013 It's a whole new Word, so jump right into this book and learn how to make the most of it. Bestselling For Dummies author Dan Gookin puts his usual fun and friendly candor back to work to show you how to navigate the new features of Word 2013. Completely in tune with the needs of the beginning user, Gookin explains how to use Word 2013 quickly and efficiently so that you can spend more time working on your projects and less time trying to figure it all out. Walks you through the capabilit…

  1. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/ machine and human/ human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin…

  2. Positive words or negative words: whose valence strength are we more sensitive to?

    Science.gov (United States)

    Yang, Jiemin; Zeng, Jing; Meng, Xianxin; Zhu, Liping; Yuan, Jiajin; Li, Hong; Yusoff, Nasir

    2013-10-02

    The present study investigates the human brain's sensitivity to the valence strength of emotionally positive and negative Chinese words. Event-Related Potentials were recorded, in two different experimental sessions, for Highly Positive (HP), Mildly Positive (MP) and neutral (NP) words and for Highly Negative (HN), Mildly Negative (MN) and neutral (NN) words, while subjects were required to count the number of words, irrespective of word meanings. The results showed a significant emotion effect in brain potentials for both HP and MP words, and the emotion effect occurred faster for HP words than MP words: HP words elicited more negative deflections than NP words in N2 (250-350 ms) and P3 (350-500 ms) amplitudes, while MP words elicited a significant emotion effect in P3, but not in N2, amplitudes. By contrast, HN words elicited larger amplitudes than NN words in N2 but not in P3 amplitudes, whereas MN words produced no significant emotion effect across N2 and P3 components. Moreover, the size of emotion-neutral differences in P3 amplitudes was significantly larger for MP compared to MN words. Thus, the human brain is reactive to both highly and mildly positive words, and this reactivity increased with the positive valence strength of the words. Conversely, the brain is less reactive to the valence of negative relative to positive words. These results suggest that human brains are equipped with increased sensitivity to the valence strength of positive compared to negative words, a type of emotional stimuli that are well known for reduced arousal. © 2013 Elsevier B.V. All rights reserved.

  3. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
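
    The idea of annotating each word with a coarse shape code, so that retrieval matches codes rather than exact characters, can be illustrated with a toy text analogue. The three-class scheme below (ascender, descender, x-height) is invented here for illustration and is much cruder than the paper's image-based features.

```python
# Toy word shape coding for plain text: map each letter to a coarse shape
# class and annotate the word with the resulting code string.

ASCENDERS = set("bdfhklt")   # letters with strokes above x-height
DESCENDERS = set("gjpqy")    # letters with strokes below the baseline

def shape_code(word):
    """Annotate a word with a per-letter shape code: a/d/x."""
    def code(ch):
        if ch in ASCENDERS:
            return "a"
        if ch in DESCENDERS:
            return "d"
        return "x"
    return "".join(code(ch) for ch in word.lower())

print(shape_code("word"))   # → "xxxa"
print(shape_code("shape"))  # → "xaxdx"
```

    Two word images whose codes match become retrieval candidates even when OCR of the exact characters would fail.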

  4. Textual, Genre and Social Features of Spoken Grammar: A Corpus-Based Approach

    Directory of Open Access Journals (Sweden)

    Carmen Pérez-Llantada

    2009-02-01

    Full Text Available This paper describes a corpus-based approach to teaching and learning spoken grammar for English for Academic Purposes with reference to Bhatia’s (2002) multi-perspective model for discourse analysis: a textual perspective, a genre perspective and a social perspective. From a textual perspective, corpus-informed instruction helps students identify grammar items through statistical frequencies, collocational patterns, context-sensitive meanings and discoursal uses of words. From a genre perspective, corpus observation provides students with exposure to recurrent lexico-grammatical patterns across different academic text types (genres). From a social perspective, corpus models can be used to raise learners’ awareness of how speakers’ different discourse roles, discourse privileges and power statuses are enacted in their grammar choices. The paper describes corpus-based instructional procedures, gives samples of learners’ linguistic output, and provides comments on the students’ response to this method of instruction. Data resulting from the assessment process and student production suggest that corpus-informed instruction grounded in Bhatia’s multi-perspective model can constitute a pedagogical approach in order to (i) obtain positive student responses from input and authentic samples of grammar use, (ii) help students identify and understand the textual, genre and social aspects of grammar in real contexts of use, and therefore (iii) help develop students’ ability to use grammar accurately and appropriately.
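
    The statistical frequencies and collocational patterns mentioned above are straightforward corpus computations; a minimal sketch over a toy corpus follows (the corpus and the whitespace tokenizer are placeholders, not the study's materials).

```python
# Word frequency list and simple bigram collocations from a toy corpus.
from collections import Counter

corpus = "the results suggest that the results were robust".split()

# Word frequency list:
freq = Counter(corpus)
print(freq.most_common(2))  # → [('the', 2), ('results', 2)]

# Bigram collocations (adjacent word pairs) ranked by frequency:
bigrams = Counter(zip(corpus, corpus[1:]))
print(bigrams.most_common(1))  # → [(('the', 'results'), 2)]
```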

  5. Listening in circles. Spoken drama and the architects of sound, 1750-1830.

    Science.gov (United States)

    Tkaczyk, Viktoria

    2014-07-01

    The establishment of the discipline of architectural acoustics is generally attributed to the physicist Wallace Clement Sabine, who developed the formula for reverberation time around 1900, and with it the possibility of making calculated prognoses about the acoustic potential of a particular design. If, however, we shift the perspective from the history of this discipline to the history of architectural knowledge and praxis, it becomes apparent that the topos of 'good sound' had already entered the discourse much earlier. This paper traces the Europe-wide discussion on theatre architecture between 1750 and 1830. It will be shown that the period of investigation is marked by an increasing interest in auditorium acoustics, one linked to the emergence of a bourgeois theatre culture and the growing socio-political importance of the spoken word. In the wake of this development the search among architects for new methods of acoustic research started to differ fundamentally from an analogical reasoning on the nature of sound propagation and reflection, which in part dated back to antiquity. Through their attempts to find new ways of visualising the behaviour of sound in enclosed spaces and to rethink both the materiality and the mediality of theatre auditoria, architects helped pave the way for the establishment of architectural acoustics as an academic discipline around 1900.
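
    The reverberation-time formula attributed to Sabine in the abstract is RT60 = 0.161·V/A, with room volume V in cubic metres and total absorption A in square-metre sabins (surface area times absorption coefficient, summed over surfaces). A minimal sketch; the hall dimensions and coefficients below are illustrative, not historical data.

```python
# Sabine's reverberation-time formula: RT60 = 0.161 * V / A.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# A hypothetical 2000 m^3 auditorium with absorptive seating and
# more reflective walls and ceiling:
surfaces = [(300, 0.6), (900, 0.1)]  # (area, coefficient)
print(round(rt60_sabine(2000, surfaces), 2))  # → 1.19 seconds
```

    Such a calculation is what made acoustic prognoses of a design possible before a hall was built.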

  6. Food words distract the hungry: Evidence of involuntary semantic processing of task-irrelevant but biologically-relevant unexpected auditory words.

    Science.gov (United States)

    Parmentier, Fabrice B R; Pacheco-Unguetti, Antonia P; Valero, Sara

    2018-01-01

    Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants' biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants' biological needs.

  7. From Word Alignment to Word Senses, via Multilingual Wordnets

    Directory of Open Access Journals (Sweden)

    Dan Tufis

    2006-05-01

Most of the successful commercial applications in language processing (text and/or speech) dispense with any explicit concern for semantics, the usual motivation being the high computational costs required for dealing with semantics in large volumes of data. With recent advances in corpus linguistics and statistics-based methods in NLP, revealing useful semantic features of linguistic data is becoming cheaper and cheaper, and the accuracy of this process is steadily improving. Lately, there seems to be a growing acceptance of the idea that multilingual lexical ontologies might be the key towards aligning different views on the semantic atomic units to be used in characterizing the general meaning of various multilingual documents. Depending on the granularity at which semantic distinctions are necessary, the accuracy of basic semantic processing (such as word sense disambiguation) can be very high with relatively low-complexity computing. The paper substantiates this statement by presenting a statistics-based system for word alignment and word sense disambiguation in parallel corpora. We describe a word alignment platform which ensures the text pre-processing (tokenization, POS-tagging, lemmatization, chunking) and the sentence and word alignment required by accurate word sense disambiguation.
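The statistical association underlying such word alignment can be illustrated with a toy Dice-coefficient scorer over a miniature parallel corpus. The bitext and scores below are hypothetical stand-ins for illustration, not the platform described in the record.

```python
from collections import Counter
from itertools import product

# Toy English-Romanian bitext (invented examples) to illustrate the
# co-occurrence statistics behind statistical word alignment.
bitext = [
    ("the house", "casa"),
    ("the book", "cartea"),
    ("a house", "o casa"),
]

src_counts, tgt_counts, pair_counts = Counter(), Counter(), Counter()
for src, tgt in bitext:
    s_words, t_words = set(src.split()), set(tgt.split())
    src_counts.update(s_words)
    tgt_counts.update(t_words)
    # Count every source-target word pair that co-occurs in a sentence pair.
    pair_counts.update(product(s_words, t_words))

def dice(s, t):
    """Dice association score: high when s and t consistently co-occur."""
    return 2 * pair_counts[(s, t)] / (src_counts[s] + tgt_counts[t])

print(dice("house", "casa"))  # -> 1.0 (always co-occur)
print(dice("book", "casa"))   # -> 0.0 (never co-occur)
```

Real systems refine such association scores with iterative models (e.g. EM-trained translation probabilities), but the co-occurrence counting shown here is the common starting point.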

  8. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  9. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  10. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Support for an auto-associative model of spoken cued recall: evidence from fMRI.

    Science.gov (United States)

    de Zubicaray, Greig; McMahon, Katie; Eastburn, Mathew; Pringle, Alan J; Lorenz, Lina; Humphreys, Michael S

    2007-03-02

    Cued recall and item recognition are considered the standard episodic memory retrieval tasks. However, only the neural correlates of the latter have been studied in detail with fMRI. Using an event-related fMRI experimental design that permits spoken responses, we tested hypotheses from an auto-associative model of cued recall and item recognition [Chappell, M., & Humphreys, M. S. (1994). An auto-associative neural network for sparse representations: Analysis and application to models of recognition and cued recall. Psychological Review, 101, 103-128]. In brief, the model assumes that cues elicit a network of phonological short term memory (STM) and semantic long term memory (LTM) representations distributed throughout the neocortex as patterns of sparse activations. This information is transferred to the hippocampus which converges upon the item closest to a stored pattern and outputs a response. Word pairs were learned from a study list, with one member of the pair serving as the cue at test. Unstudied words were also intermingled at test in order to provide an analogue of yes/no recognition tasks. Compared to incorrectly rejected studied items (misses) and correctly rejected (CR) unstudied items, correctly recalled items (hits) elicited increased responses in the left hippocampus and neocortical regions including the left inferior prefrontal cortex (LIPC), left mid lateral temporal cortex and inferior parietal cortex, consistent with predictions from the model. This network was very similar to that observed in yes/no recognition studies, supporting proposals that cued recall and item recognition involve common rather than separate mechanisms.
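The model's core operation, converging from a degraded cue onto the nearest stored sparse pattern, can be sketched with a minimal Hopfield-style auto-associator. The network size, number of patterns, and cue noise below are illustrative assumptions, not the actual parameters of the Chappell and Humphreys model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random +/-1 patterns stand in for stored memory traces
# (hypothetical sizes, chosen only to demonstrate pattern completion).
n_units = 64
patterns = np.where(rng.random((2, n_units)) < 0.5, 1, -1)

# Hebbian outer-product learning rule for an auto-associative network.
W = np.zeros((n_units, n_units))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=10):
    """Iteratively update all units until the network settles on the
    stored pattern closest to the cue (pattern completion)."""
    state = cue.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Degrade the first stored pattern by flipping 8 of its 64 units,
# mimicking a partial retrieval cue at test.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=8, replace=False)
cue[flip] *= -1

recovered = recall(cue)
print(np.array_equal(recovered, patterns[0]))  # degraded cue settles back
```

This is the convergence behaviour the record attributes to the hippocampal stage: a noisy distributed cue is completed to the closest stored item, which then drives the spoken response.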

  12. Applicability of the Spoken Knowledge in Low Literacy Patients with Diabetes in Brazilian elderly.

    Science.gov (United States)

    Souza, Jonas Gordilho; Apolinario, Daniel; Farfel, José Marcelo; Jaluul, Omar; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Campora, Flávia; Jacob-Filho, Wilson

    2016-01-01

To translate, adapt, and evaluate the properties of a Brazilian Portuguese version of the Spoken Knowledge in Low Literacy Patients with Diabetes, a questionnaire that evaluates diabetes knowledge. A cross-sectional study with type 2 diabetes patients aged ≥60 years, seen at a public healthcare organization in the city of São Paulo (SP). After developing the Portuguese version, we evaluated its psychometric properties and its association with sociodemographic and clinical variables. The regression models were adjusted for sociodemographic data, functional health literacy, duration of disease, use of insulin, and glycemic control. We evaluated 129 type 2 diabetic patients, with mean age of 75.9 (±6.2) years, mean schooling of 5.2 (±4.4) years, mean glycosylated hemoglobin of 7.2% (±1.4), and mean score on the Spoken Knowledge in Low Literacy Patients with Diabetes of 42.1% (±25.8). In the regression model, the variables independently associated with the Spoken Knowledge in Low Literacy Patients with Diabetes score were schooling (B=0.193; p=0.003), use of insulin (B=1.326; p=0.004), duration of diabetes (B=0.053; p=0.022), and health literacy (B=0.108; p=0.021). The determination coefficient was 0.273. Cronbach's α was 0.75, demonstrating appropriate internal consistency. This translated version of the Spoken Knowledge in Low Literacy Patients with Diabetes proved adequate for evaluating diabetes knowledge in elderly patients with low schooling levels. It presented normal distribution and adequate internal consistency, with no ceiling or floor effect. The tool is easy to use, can be applied quickly, and does not depend on reading skills.

  13. Understanding Medical Words

    Science.gov (United States)

Past Issues / Summer 2009: a feature that teaches you about many of the words related to your health care.

  14. Fast mapping of novel word forms traced neurophysiologically

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2011-11-01

Human capacity to quickly learn new words, critical for our ability to communicate using language, is well known from behavioural studies and observations, but its neural underpinnings remain unclear. In this study, we have used event-related potentials to record brain activity to novel spoken word forms as they are being learnt by the human nervous system through passive auditory exposure. We found that the brain response dynamics change dramatically within the short (20 min) exposure session: as the subjects become familiarised with the novel word forms, the early (~100 ms) fronto-central activity they elicit increases in magnitude and becomes similar to that of known real words. At the same time, acoustically similar real words used as control stimuli show a relatively stable response throughout the recording session; these differences between the stimulus groups are confirmed using both factorial and linear regression analyses. Furthermore, acoustically matched novel non-speech stimuli do not demonstrate a similar response increase, suggesting neural specificity of this rapid learning phenomenon to linguistic stimuli. Left-lateralised perisylvian cortical networks appear to underlie such fast mapping of novel word forms onto the brain’s mental lexicon.

  15. Words and melody are intertwined in perception of sung words: EEG and behavioral evidence.

    Directory of Open Access Journals (Sweden)

    Reyna L Gordon

Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-related brain potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.

  16. Attention and Gaze Control in Picture Naming, Word Reading, and Word Categorizing

    Science.gov (United States)

    Roelofs, Ardi

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture-word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to…

  17. Attention and gaze control in picture naming, word reading, and word categorizing

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment

  18. THE SPECIAL STATUS OF EXOGENOUS WORD-FORMATION WITHIN THE GERMAN WORD-FORMATION SYSTEM

    OpenAIRE

    Zhilyuk Sergey Aleksandrovich

    2014-01-01

The article presents the properties of the exogenous word-formation system, taking into account the existence of two word-formation systems in modern German. Drawing on foreign research that reveals modern trends in German word-formation connected with internationalization and the development of a new European Latin-based lexicon, the author defines the key features of exogenous word-formation: the foreign origin of word-formation units, unmotivated units, and unmotivated interchange in bases and affixes...

  19. WordPress Bible

    CERN Document Server

    Brazell, Aaron

    2010-01-01

The WordPress Bible provides a complete and thorough guide to the largest self-hosted blogging tool. This guide starts by covering the basics of WordPress, such as installation and the principles of blogging, marketing, and social media interaction, but then quickly ramps the reader up to intermediate and advanced topics such as plugins, the WordPress Loop, themes and templates, custom fields, caching, security, and more. The WordPress Bible is the only complete resource one needs for learning WordPress from beginning to end.

  20. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  1. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

Computer Science 81 (2016) 128–135. 5th Workshop on Spoken Language Technology for Under-resourced Languages (SLTU 2016), 9–12 May 2016, Yogyakarta, Indonesia: Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection.

  2. Spoken sentence comprehension in aphasia: Event-related potential evidence for a lexical integration deficit

    NARCIS (Netherlands)

    Swaab, T.Y.; Brown, C.; Hagoort, P.

    1997-01-01

    In this study the N400 component of the event-related potential was used to investigate spoken sentence understanding in Broca's and Wernicke's aphasics. The aim of the study was to determine whether spoken sentence comprehension problems in these patients might result from a deficit in the on-line

  3. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  4. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  5. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  6. SUBTLEX-ESP: Spanish Word Frequencies Based on Film Subtitles

    Science.gov (United States)

    Cuetos, Fernando; Glez-Nosti, Maria; Barbon, Analia; Brysbaert, Marc

    2011-01-01

Recent studies have shown that word frequency estimates obtained from film and television subtitles predict performance in word recognition experiments better than the traditional word frequency estimates based on books and newspapers. In this study, we present a subtitle-based word frequency list for Spanish, one of the most widely spoken…
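At its core, building such a frequency list amounts to tokenizing subtitle text and counting. The sketch below uses a few invented Spanish lines rather than the actual SUBTLEX-ESP film corpus, and reports counts on the per-million-words scale frequency norms conventionally use.

```python
import re
from collections import Counter

# Hypothetical subtitle lines standing in for a large film-subtitle corpus.
subtitle_lines = [
    "¿Dónde está la casa?",
    "La casa está aquí.",
    "No sé dónde está.",
]

counts = Counter()
for line in subtitle_lines:
    # Lowercase and keep word characters (\w matches accented letters too).
    counts.update(re.findall(r"\w+", line.lower()))

# Convert raw counts to frequency per million words.
total = sum(counts.values())
freq_per_million = {w: c / total * 1_000_000 for w, c in counts.items()}

print(counts.most_common(2))  # -> [('está', 3), ('dónde', 2)]
```

With a corpus of tens of millions of subtitle tokens, the same pipeline yields the contextual-diversity and frequency norms that subtitle-based studies report.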

  7. Processing Electromyographic Signals to Recognize Words

    Science.gov (United States)

    Jorgensen, C. C.; Lee, D. D.

    2009-01-01

A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
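The windowed feature-extraction stage of such a pipeline can be sketched as follows. The synthetic signal, sampling rate, and window sizes are illustrative assumptions; a real system would compute features of this kind from electrode recordings and feed them to the neural-network classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for one digitized surface-EMG channel
# (1 s of data at an assumed 2 kHz sampling rate).
emg = rng.normal(0.0, 1.0, size=2000)

def extract_features(signal, win=200, step=100):
    """Slide a window over the signal and compute two classic EMG
    features per window: root-mean-square amplitude (muscle activation
    level) and zero-crossing count (a crude spectral measure)."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        rms = np.sqrt(np.mean(w ** 2))
        zero_crossings = int(np.sum(np.signbit(w[:-1]) != np.signbit(w[1:])))
        feats.append((rms, zero_crossings))
    return np.array(feats)

X = extract_features(emg)
print(X.shape)  # one (rms, zero-crossings) pair per 100 ms window
```

Each row of `X` would become one input vector to the pattern classifier; stacking several consecutive windows captures the temporal evolution of a sub-vocalized word.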

  8. Combinatorics of compositions and words

    CERN Document Server

    Heubach, Silvia

    2009-01-01

    A One-Stop Source of Known Results, a Bibliography of Papers on the Subject, and Novel Research Directions Focusing on a very active area of research in the last decade, Combinatorics of Compositions and Words provides an introduction to the methods used in the combinatorics of pattern avoidance and pattern enumeration in compositions and words. It also presents various tools and approaches that are applicable to other areas of enumerative combinatorics. After a historical perspective on research in the area, the text introduces techniques to solve recurrence relations, including iteration and generating functions. It then focuses on enumeration of basic statistics for compositions. The text goes on to present results on pattern avoidance for subword, subsequence, and generalized patterns in compositions and then applies these results to words. The authors also cover automata, the ECO method, generating trees, and asymptotic results via random compositions and complex analysis. Highlighting both established a...

  9. Different Neural Correlates of Emotion-Label Words and Emotion-Laden Words: An ERP Study

    Directory of Open Access Journals (Sweden)

    Juan Zhang

    2017-09-01

It is well-documented that both emotion-label words (e.g., sadness, happiness) and emotion-laden words (e.g., death, wedding) can induce emotion activation. However, the neural correlates of emotion-label word and emotion-laden word recognition have not been examined. The present study aimed to compare the underlying neural responses when processing the two kinds of words by employing event-related potential (ERP) measurements. Fifteen Chinese native speakers were asked to perform a lexical decision task in which they judged whether a two-character compound stimulus was a real word or not. Results showed that (1) emotion-label words and emotion-laden words elicited similar P100 at the posterior sites, (2) larger N170 was found for emotion-label words than for emotion-laden words at the occipital sites on the right hemisphere, and (3) negative emotion-label words elicited a larger Late Positivity Complex (LPC) on the right hemisphere than on the left hemisphere, while such an effect was not found for emotion-laden words or positive emotion-label words. The results indicate that emotion-label words and emotion-laden words elicit different cortical responses at both early (N170) and late (LPC) stages. In addition, a right-hemisphere advantage for emotion-label words over emotion-laden words can be observed in certain time windows (i.e., N170 and LPC) while failing to be detected in another (i.e., P100). The implications of the current findings for future emotion research are discussed.

  10. Different Neural Correlates of Emotion-Label Words and Emotion-Laden Words: An ERP Study.

    Science.gov (United States)

    Zhang, Juan; Wu, Chenggang; Meng, Yaxuan; Yuan, Zhen

    2017-01-01

It is well-documented that both emotion-label words (e.g., sadness, happiness) and emotion-laden words (e.g., death, wedding) can induce emotion activation. However, the neural correlates of emotion-label word and emotion-laden word recognition have not been examined. The present study aimed to compare the underlying neural responses when processing the two kinds of words by employing event-related potential (ERP) measurements. Fifteen Chinese native speakers were asked to perform a lexical decision task in which they judged whether a two-character compound stimulus was a real word or not. Results showed that (1) emotion-label words and emotion-laden words elicited similar P100 at the posterior sites, (2) larger N170 was found for emotion-label words than for emotion-laden words at the occipital sites on the right hemisphere, and (3) negative emotion-label words elicited larger Late Positivity Complex (LPC) on the right hemisphere than on the left hemisphere while such effect was not found for emotion-laden words and positive emotion-label words. The results indicate that emotion-label words and emotion-laden words elicit different cortical responses at both early (N170) and late (LPC) stages. In addition, right hemisphere advantage for emotion-label words over emotion-laden words can be observed in certain time windows (i.e., N170 and LPC) while failing to be detected in some other time windows (i.e., P100). The implications of the current findings for future emotion research were discussed.

  11. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  12. The effect of written text on comprehension of spoken English as a foreign language.

    Science.gov (United States)

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  13. Clusters of word properties as predictors of elementary school children’s performance on two word tasks

    NARCIS (Netherlands)

    Tellings, A.E.J.M.; Coppens, K.; Gelissen, J.P.T.M.; Schreuder, R.

    2013-01-01

    Often, the classification of words does not go beyond “difficult” (i.e., infrequent, late-learned, nonimageable, etc.) or “easy” (i.e., frequent, early-learned, imageable, etc.) words. In the present study, we used a latent cluster analysis to divide 703 Dutch words with scores for eight word

  14. Competition between multiple words for a referent in cross-situational word learning

    Science.gov (United States)

    Benitez, Viridiana L.; Yurovsky, Daniel; Smith, Linda B.

    2016-01-01

    Three experiments investigated competition between word-object pairings in a cross-situational word-learning paradigm. Adults were presented with One-Word pairings, where a single word labeled a single object, and Two-Word pairings, where two words labeled a single object. In addition to measuring learning of these two pairing types, we measured competition between words that refer to the same object. When the word-object co-occurrences were presented intermixed in training (Experiment 1), we found evidence for direct competition between words that label the same referent. Separating the two words for an object in time eliminated any evidence for this competition (Experiment 2). Experiment 3 demonstrated that adding a linguistic cue to the second label for a referent led to different competition effects between adults who self-reported different language learning histories, suggesting both distinctiveness and language learning history affect competition. Finally, in all experiments, competition effects were unrelated to participants’ explicit judgments of learning, suggesting that competition reflects the operating characteristics of implicit learning processes. Together, these results demonstrate that the role of competition between overlapping associations in statistical word-referent learning depends on time, the distinctiveness of word-object pairings, and language learning history. PMID:27087742
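A bare-bones associative account of this paradigm simply accumulates word-object co-occurrence counts across ambiguous training trials. The pseudo-words and objects below are hypothetical, echoing the One-Word and Two-Word pairing types the record describes.

```python
from collections import Counter

# Hypothetical training trials: each presents one or two labels with one object.
trials = [
    (["bosa"], "obj1"),            # One-Word pairing
    (["gasser", "manu"], "obj2"),  # Two-Word pairing: two labels, one object
    (["bosa"], "obj1"),
    (["gasser", "manu"], "obj2"),
]

# Accumulate word-object co-occurrence strengths across trials.
assoc = Counter()
for words, obj in trials:
    for w in words:
        assoc[(w, obj)] += 1

def best_object(word, objects=("obj1", "obj2")):
    """A simple learner picks the object most strongly associated with the word."""
    return max(objects, key=lambda o: assoc[(word, o)])

print(best_object("bosa"))    # -> obj1
print(best_object("gasser"))  # -> obj2
```

In this purely additive sketch both labels of a Two-Word pairing reach the same strength, so it cannot by itself produce the competition effects the experiments report; modelling those requires an inhibitory term between words sharing a referent, which is exactly the mechanism the study probes.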

  15. Directed forgetting: Comparing pictures and words.

    Science.gov (United States)

    Quinlan, Chelsea K; Taylor, Tracy L; Fawcett, Jonathan M

    2010-03-01

    The authors investigated directed forgetting as a function of the stimulus type (picture, word) presented at study and test. In an item-method directed forgetting task, study items were presented 1 at a time, each followed with equal probability by an instruction to remember or forget. Participants exhibited greater yes-no recognition of remember than forget items for each of the 4 study-test conditions (picture-picture, picture-word, word-word, word-picture). However, this difference was significantly smaller when pictures were studied than when words were studied. This finding demonstrates that the magnitude of the directed forgetting effect can be reduced by high item memorability, such as when the picture superiority effect is operating. This suggests caution in using pictures at study when the goal of an experiment is to examine potential group differences in the magnitude of the directed forgetting effect.

  16. Spoken commands control robot that handles radioactive materials

    International Nuclear Information System (INIS)

    Phelan, P.F.; Keddy, C.; Beugelsdojk, T.J.

    1989-01-01

    Several robotic systems have been developed by Los Alamos National Laboratory to handle radioactive material. Because of safety considerations, the robotic system must be under direct human supervision and interactive control continuously. In this paper, we describe the implementation of a voice-recognition system that permits this control, yet allows the robot to perform complex preprogrammed manipulations without the operator's intervention. To provide better interactive control, we connected a speech-synthesis unit to the robot's control computer to give audible feedback to the operator. Thus, upon completion of a task or when an emergency arises, an appropriate spoken message can be reported by the control computer. The training, programming, and operation of this commercially available system are discussed, as are the practical problems encountered during operations.

  17. Computational Interpersonal Communication: Communication Studies and Spoken Dialogue Systems

    Directory of Open Access Journals (Sweden)

    David J. Gunkel

    2016-09-01

    With the advent of spoken dialogue systems (SDS), communication can no longer be considered a human-to-human transaction. It now involves machines. These mechanisms are not just a medium through which human messages pass; they now occupy the position of the other in social interactions. But the development of robust and efficient conversational agents is not just an engineering challenge. It also depends on research in human conversational behavior. It is the thesis of this paper that communication studies is best situated to respond to this need. The paper argues (1) that research in communication can supply the information necessary to respond to and resolve many of the open problems in SDS engineering, and (2) that the development of SDS applications can provide the discipline of communication with unique opportunities to test extant theory and verify experimental results. We call this new area of interdisciplinary collaboration “computational interpersonal communication” (CIC).

  18. Attentional Processing and Recall of Emotional Words

    OpenAIRE

    Fraga Carou, Isabel; Redondo, Jaime; Piñeiro, Ana; Padrón, Isabel; Fernández-Rey, José; Alcaraz, Miguel

    2011-01-01

    Three experiments were carried out in order to evaluate the attention paid to words of different emotional value. A dual-task experimental paradigm was employed, registering response times to acoustic tones which were presented during the reading of words. The recall was also evaluated by means of an intentional immediate recall test. The results reveal that neither the emotional valence nor the arousal of words on their own affected the attention paid by participants. Only in the third exper...

  19. Perception of words and pitch patterns in song and speech

    Directory of Open Access Journals (Sweden)

    Julia eMerrill

    2012-03-01

    This fMRI study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words, pitch and rhythm. Univariate and multivariate analyses were performed on the brain activity patterns of six conditions, arranged in a subtractive hierarchy: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; as well as the pure musical or speech rhythm. Systematic contrasts between these balanced conditions, following their hierarchical organization, showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. The left IFG was involved in word- and pitch-related processing in speech, the right IFG in processing pitch in song. Furthermore, the IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features, which is reflected in a fundamental similarity of the brain areas involved in their perception. However, fine-grained acoustic differences at the word and pitch level are reflected in the activity of the IFG and IPS.

  20. Impressive Words: Linguistic Predictors of Public Approval of the U.S. Congress

    OpenAIRE

    Decter-Frain, Ari; Frimer, Jeremy A.

    2016-01-01

    What type of language makes the most positive impression within a professional setting? Is competent/agentic language or warm/communal language more effective at eliciting social approval? We examined this basic social cognitive question in a real-world context using a big data approach: the recent record-low levels of public approval of the U.S. Congress. Using Linguistic Inquiry and Word Count (LIWC), we text analyzed all 123+ million words spoken by members of the U.S. House of Representatives...
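The word-counting side of a LIWC-style analysis can be sketched as follows. This is a toy version with invented category word lists (LIWC's actual dictionaries are proprietary and far larger); it only shows the basic idea of scoring a text by the proportion of words falling into each category.

```python
# Toy LIWC-style category counter. The category vocabularies below are
# invented for illustration; they are not LIWC's dictionaries.
CATEGORIES = {
    "communal": {"we", "together", "community", "share", "help"},
    "agentic":  {"win", "power", "achieve", "lead", "compete"},
}

def category_rates(text):
    """Proportion of words in `text` matching each category list."""
    words = text.lower().split()
    n = len(words) or 1
    return {cat: sum(w.strip(".,;!?") in vocab for w in words) / n
            for cat, vocab in CATEGORIES.items()}
```

For a sentence such as "We can achieve more when we help and share together.", the communal rate exceeds the agentic rate, which is the kind of per-speaker signal the study correlates with approval ratings.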

  1. Word Pocket Guide

    CERN Document Server

    Glenn, Walter

    2004-01-01

    Millions of people use Microsoft Word every day and, chances are, you're one of them. Like most Word users, you've attained a certain level of proficiency--enough to get by, with a few extra tricks and tips--but don't get the opportunity to probe much further into the real power of Word. And Word is so rich in features that regardless of your level of expertise, there's always more to master. If you've ever wanted a quick answer to a nagging question or had the thought that there must be a better way, then this second edition of Word Pocket Guide is just what you need. Updated for Word 2003

  2. Nurturing a lexical legacy: reading experience is critical for the development of word reading skill

    Science.gov (United States)

    Nation, Kate

    2017-12-01

    The scientific study of reading has taught us much about the beginnings of reading in childhood, with clear evidence that the gateway to reading opens when children are able to decode, or 'sound out', written words. Similarly, there is a large evidence base charting the cognitive processes that characterise skilled word recognition in adults. Less understood is how children develop word reading expertise. Once basic reading skills are in place, what factors are critical for children to move from novice to expert? This paper outlines the role of reading experience in this transition. Encountering individual words in text provides opportunities for children to refine their knowledge about how spelling represents spoken language. Alongside this, however, reading experience provides much more than repeated exposure to individual words in isolation. According to the lexical legacy perspective, outlined in this paper, experiencing words in diverse and meaningful language environments is critical for the development of word reading skill. At its heart is the idea that reading provides exposure to words in many different contexts, episodes and experiences which, over time, sum to a rich and nuanced database about their lexical history within an individual's experience. These rich and diverse encounters bring about local variation at the word level: a lexical legacy that is measurable during word reading behaviour, even in skilled adults.

  3. Lacquered Words: The Evolution of Vietnamese under Sinitic Influences from the 1st Century B.C.E. through the 17th Century C.E.

    Science.gov (United States)

    Phan, John Duong

    2013-01-01

    As much as three quarters of the modern Vietnamese lexicon is of Chinese origin. The majority of these words are often assumed to have originated in much the same manner as late Sino-Korean and Sino-Japanese borrowed forms: by rote memorization of reading glosses that were acquired through limited exposure to spoken Sinitic. However, under closer…

  4. Electronic Word of Behavior

    DEFF Research Database (Denmark)

    Kunst, Katrine; Vatrapu, Ravi; Hussain, Abid

    2017-01-01

    In this research-in-progress paper, we introduce the notion of ‘Electronic Word of Behavior’ (eWOB) to describe the phenomenon of consumers’ product-related behaviors increasingly made observable by online social environments. We employ Observational Learning theory to conceptualize the notion of eWOB and generate hypotheses about how consumers influence each other by means of behavior in online social environments. We present a conceptual framework for categorizing eWOB, and propose a novel research design for a randomized controlled field experiment. Specifically, the ongoing experiment aims to analyze how the presence of individual-specific behavior-based social information in a movie streaming service affects potential users’ attitude towards and intentions to use the service.

  5. Clinical Strategies for Sampling Word Recognition Performance.

    Science.gov (United States)

    Schlauch, Robert S; Carney, Edward

    2018-04-17

    Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the one list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance-intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds for both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
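The core of such a simulation can be reconstructed roughly as follows. This is a Monte Carlo sketch under my own assumptions, not the authors' code or criterion: scores on an n-word list are treated as binomial, and a true 60% to 40% drop counts as "detected" when the observed drop exceeds the 95th percentile of drops seen when ability is actually unchanged. The exact detection rates therefore differ from the article's figures, but the list-length effect is the same.

```python
# Monte Carlo sketch: how list length affects the chance of detecting
# a true 60% -> 40% drop in word recognition ability.
import random

rng = random.Random(1)
N_TRIALS = 5_000

def score(p, n):
    """Percent correct on one n-word list for a client with true ability p."""
    return 100.0 * sum(rng.random() < p for _ in range(n)) / n

def critical_drop(p, n):
    """95th percentile of list-to-list score drops under no true change."""
    drops = sorted(score(p, n) - score(p, n) for _ in range(N_TRIALS))
    return drops[int(0.95 * N_TRIALS)]

powers = {}
for n in (25, 50, 125):
    crit = critical_drop(0.60, n)
    hits = sum(score(0.60, n) - score(0.40, n) > crit for _ in range(N_TRIALS))
    powers[n] = hits / N_TRIALS
    print(f"{n:3d}-word list: critical drop {crit:.0f} points, "
          f"detection rate {powers[n]:.1%}")
```

As in the article, the short 25-word list misses a large share of genuine 20-point declines, while a 125-word list detects nearly all of them.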

  6. The Involvement of Morphological Information in the Memorization of Chinese Compound Words: Evidence from Memory Errors

    Science.gov (United States)

    Liu, Duo

    2016-01-01

    The processing of morphological information during Chinese word memorization was investigated in the present study. Participants were asked to study words presented to them on a computer screen in the studying phase and then judge whether presented words were old or new in the test phase. In addition to parent words (i.e. the words studied in the…

  7. More than a Word Cloud

    Science.gov (United States)

    Filatova, Olga

    2016-01-01

    Word cloud generating applications were originally designed to add visual attractiveness to posters, websites, slide show presentations, and the like. They can also be an effective tool in reading and writing classes in English as a second language (ESL) for all levels of English proficiency. They can reduce reading time and help to improve…

  8. Word/sub-word lattices decomposition and combination for speech recognition

    OpenAIRE

    Le , Viet-Bac; Seng , Sopheap; Besacier , Laurent; Bigi , Brigitte

    2008-01-01

    This paper presents the benefit of using multiple lexical units in the post-processing stage of an ASR system. Since the use of sub-word units can reduce the high out-of-vocabulary rate and compensate for the lack of text resources in statistical language modeling, we propose several methods to decompose, normalize and combine word and sub-word lattices generated from different ASR systems. By using a sub-word information table, every word in a lattice can be decomposed into ...
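The decomposition step described above relies on a word-to-sub-word lookup table. A minimal sketch (with an invented table; the paper's actual units and normalization are richer) might look like this:

```python
# Hypothetical sub-word information table mapping words to their units.
# Entries are invented for illustration.
SUBWORD_TABLE = {
    "unhappiness": ["un", "happi", "ness"],
    "rewrite": ["re", "write"],
}

def decompose(words):
    """Rewrite a word-level hypothesis as a sub-word sequence."""
    units = []
    for w in words:
        units.extend(SUBWORD_TABLE.get(w, [w]))  # unlisted words pass through
    return units
```

Rewriting each path of a word lattice this way puts word-level and sub-word-level hypotheses into a common unit inventory, so lattices from different systems can be combined.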

  9. Different Neural Correlates of Emotion-Label Words and Emotion-Laden Words: An ERP Study

    OpenAIRE

    Zhang, Juan; Wu, Chenggang; Meng, Yaxuan; Yuan, Zhen

    2017-01-01

    It is well-documented that both emotion-label words (e.g., sadness, happiness) and emotion-laden words (e.g., death, wedding) can induce emotion activation. However, the neural correlates of emotion-label words and emotion-laden words recognition have not been examined. The present study aimed to compare the underlying neural responses when processing the two kinds of words by employing event-related potential (ERP) measurements. Fifteen Chinese native speakers were asked to perform a lexical...

  10. Baby's first 10 words.

    Science.gov (United States)

    Tardif, Twila; Fletcher, Paul; Liang, Weilan; Zhang, Zhixiang; Kaciroti, Niko; Marchman, Virginia A

    2008-07-01

    Although there has been much debate over the content of children's first words, few large sample studies address this question for children at the very earliest stages of word learning. The authors report data from comparable samples of 265 English-, 336 Putonghua- (Mandarin), and 369 Cantonese-speaking 8- to 16-month-old infants whose caregivers completed MacArthur-Bates Communicative Development Inventories and reported them to produce between 1 and 10 words. Analyses of individual words indicated striking commonalities in the first words that children learn. However, substantive cross-linguistic differences appeared in the relative prevalence of common nouns, people terms, and verbs as well as in the probability that children produced even one of these word types when they had a total of 1-3, 4-6, or 7-10 words in their vocabularies. These data document cross-linguistic differences in the types of words produced even at the earliest stages of vocabulary learning and underscore the importance of parental input and cross-linguistic/cross-cultural variations in children's early word-learning.

  11. Word 2010 Bible

    CERN Document Server

    Tyson, Herb

    2010-01-01

    In-depth guidance on Word 2010 from a Microsoft MVP. Microsoft Word 2010 arrives with many changes and improvements, and this comprehensive guide from Microsoft MVP Herb Tyson is your expert, one-stop resource for it all. Master Word's new features such as a new interface and customized Ribbon, major new productivity-boosting collaboration tools, how to publish directly to blogs, how to work with XML, and much more. Follow step-by-step instructions and best practices, avoid pitfalls, discover practical workarounds, and get the very most out of your new Word 2010 with this packed guide. Coverag

  12. Usable, Real-Time, Interactive Spoken Language Systems

    Science.gov (United States)

    1994-09-01

    Similarly, we included derivations (mostly plurals and possessives) of many open-class words in the domain. We also added about 400 concatenated word… using a system of realization rules, which map the grammatical relation an argument bears to the head onto the semantic relation… syntactic categories as well. Representations of this form contain significantly more internal structure than specialized sublanguage models. This can be…

  13. Novel word retention in sequential bilingual children.

    Science.gov (United States)

    Kan, Pui Fong

    2014-03-01

    Children's ability to learn and retain new words is fundamental to their vocabulary development. This study examined word retention in children learning a home language (L1) from birth and a second language (L2) in preschool settings. Participants were presented with sixteen novel words in L1 and in L2 and were tested for retention after either a 2-month or a 4-month delay. Results showed that children retained more words in L1 than in L2 for both of the retention interval conditions. In addition, children's word retention was associated with their existing language knowledge and their fast-mapping performance within and across language. The patterns of association, however, were different between L1 and L2. These findings suggest that children's word retention might be related to the interactions of various components that are operating within a dynamic system.

  14. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    Science.gov (United States)

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  15. The Relationships among Cognitive Correlates and Irregular Word, Non-Word, and Word Reading

    Science.gov (United States)

    Abu-Hamour, Bashir; Urso, Annmarie; Mather, Nancy

    2012-01-01

    This study explored four hypotheses: (a) the relationships among rapid automatized naming (RAN) and processing speed (PS) to irregular word, non-word, and word reading; (b) the predictive power of various RAN and PS measures, (c) the cognitive correlates that best predicted irregular word, non-word, and word reading, and (d) reading performance of…

  16. Word of Jeremiah - Word of God

    DEFF Research Database (Denmark)

    Holt, Else Kragelund

    2007-01-01

    The article examines the relationship between God, prophet and the people in the Book of Jeremiah. The analysis shows a close connection, almost an identification, between the divine word (and consequently God himself) and the prophet, so that the prophet becomes a metaphor for God. This is done...

  17. Research on the Relationship between English Majors’ Learning Motivation and Spoken English in Chinese Context

    Institute of Scientific and Technical Information of China (English)

    陆佳佳

    2014-01-01

    With the increasing importance attached to spoken English, it is of great significance to find out how the motivation of English majors affects their oral English learning outcomes. Based on the research results and theoretical frameworks of previous studies in this area, this paper carries out research at Zhujiang College of South China Agricultural University, trying to identify the types of motivation and the correlation between the motivation factors of English majors and their spoken English, and thus to guide spoken English learning and teaching.

  18. [Representation of letter position in visual word recognition process].

    Science.gov (United States)

    Makioka, S

    1994-08-01

    Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly presented probe. Probes consisted of two kanji words. The letters that formed the targets (critical letters) were always contained in the probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) A high false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position, as in Experiment 1, an effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  19. RECEPTION OF SPOKEN ENGLISH. MISHEARINGS IN THE LANGUAGE OF BUSINESS AND LAW

    Directory of Open Access Journals (Sweden)

    HOREA Ioana-Claudia

    2013-07-01

    Spoken English may sometimes pose a peculiar problem in the reception and decoding of auditory signals, which can lead to mishearings. Arising from erroneous perception, from a failure to understand the communication, and from an involuntary mental replacement of a certain element or structure by a more familiar one, these mistakes are most frequently encountered when listening to songs, where the melodic line, with its somewhat altered intonation, can foster confusion and produce the so-called mondegreens. Still, instances occur in all domains of verbal communication, as shown by several examples noticed during classes of English as a foreign language (EFL) taught to non-philological students. Production and perception of language depend on a series of elements that influence the encoding and decoding of the message. These filters, belonging to both psychological and semantic categories, can interfere with the accuracy of emission and reception. Poor understanding of a notion or concept, combined with greater familiarity with a similar-sounding one, results in unconsciously picking the structure that is better known. This means 'hearing' something other than what was said, something closer to the receiver's preoccupations and baggage of knowledge than the original structure or word. Some mishearings are particularly relevant to teaching English for Specific Purposes (ESP), such as those encountered in classes of Business English or English for Law. Though not very likely to occur too often, given an intuitively felt inaccuracy (the terms are known by the users to need to be more specialised), such examples are still not ignorable. We therefore consider that they deserve a higher degree of attention, as they may become quite relevant in the global context of increasing workforce migration and the spread of multinational companies.

  20. Differential cognitive processing of Kanji and Kana words: do orthographic and semantic codes function in parallel in word matching task.

    Science.gov (United States)

    Kawakami, A; Hatta, T; Kogure, T

    2001-12-01

    Relative engagements of the orthographic and semantic codes in Kanji and Hiragana word recognition were investigated. In Exp. 1, subjects judged whether pairs of Kanji words (prime and target) presented sequentially were physically identical to each other in the word condition. In the sentence condition, subjects decided whether the target word was valid for the prime sentence presented in advance. The results showed that response times to target words orthographically similar to the prime were significantly slower than to semantically related target words in the word condition, and that this was also the case in the sentence condition. In Exp. 2, subjects judged whether the target word written in Hiragana was physically identical to the prime word in the word condition. In the sentence condition, subjects decided if the target word was valid for the previously presented prime sentence. Analysis indicated that response times to orthographically similar words were slower than to semantically related words in the word condition but not in the sentence condition, wherein response times to the semantically and orthographically similar words were largely the same. Based on these results, the differential contributions of orthographic and semantic codes in the cognitive processing of Japanese Kanji and Hiragana words are discussed.