WorldWideScience

Sample records for visual word processing

  1. From word superiority to word inferiority: Visual processing of letters and words in pure alexia

    DEFF Research Database (Denmark)

    Habekost, Thomas; Petersen, Anders; Behrmann, Marlene

    2014-01-01

    Visual processing and naming of individual letters and short words were investigated in four patients with pure alexia. To test processing at different levels, the same stimuli were studied across a naming task and a visual perception task. The normal word superiority effect was eliminated in bot...

  2. Visual word form familiarity and attention in lateral difference during processing Japanese Kana words.

    Science.gov (United States)

    Nakagawa, A; Sukigara, M

    2000-09-01

The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, and subjects performed lexical decisions on them. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar-script condition did increased stimulus presentation time affect each visual field differently. To examine this lateral difference during the processing of unfamiliar scripts as related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, while orthographically familiar Kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form. Copyright 2000 Academic Press.

  3. Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.

    Science.gov (United States)

    Yoshizaki, K

    2001-12-01

The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words, Katakana-familiar and Hiragana-familiar, were used as the word stimuli: the former are words more frequently written in Katakana script, and the latter are words written predominantly in Hiragana script. Two conditions were set up in terms of visual familiarity: in the visually familiar condition, words were presented in their familiar script form, and in the visually unfamiliar condition, in their less familiar script form. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain, i.e., superior performance in the bilateral visual-field condition relative to the unilateral conditions, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.

  4. The impact of inverted text on visual word processing: An fMRI study.

    Science.gov (United States)

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated, or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found not to behave like the fusiform face area, in that unusual text orientations resulted in increased, not decreased, activation. It is hypothesized here that VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. [Representation of letter position in visual word recognition process].

    Science.gov (United States)

    Makioka, S

    1994-08-01

Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in the probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) A high false-alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, an effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  6. A dual-task investigation of automaticity in visual word processing

    Science.gov (United States)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  7. Modulation of human extrastriate visual processing by selective attention to colours and words.

    Science.gov (United States)

    Nobre, A C; Allison, T; McCarthy, G

    1998-07-01

    The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.

  8. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    Science.gov (United States)

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  9. Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.

    Science.gov (United States)

    Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing

    2018-03-28

    The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while only very few have investigated the role of finer skills. The present study filled this gap and examined the relations of two finer visual skills measured by grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect/discriminate relative locations of features) to Chinese character-processing as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, we found no correlation of the two visual skills with lexical decision performance. These findings provide for the first time empirical evidence that the finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.

  10. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    Science.gov (United States)

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  11. The Development of Spatial Configuration Processing of Visual Word Forms

    Directory of Open Access Journals (Sweden)

    Chienhui Kao

    2011-05-01

The analysis of spatial relationships, or configuration, among the components of a character is important for visual word form recognition (Kao et al., 2010). We investigated such spatial configuration processing in dyslexic and developing populations. Four types of characters were used in this study: real and non-characters and their upside-down versions. The task of the observers was to determine whether two characters presented on the display were identical. One group of dyslexic children (Dys) and two groups of non-dyslexic controls, one (RL) matched to Dys in reading performance and the other (CA) matched in age, were recruited. Dys performed significantly worse than the control groups for all character types, suggesting worse visual word form processing in dyslexics. For Dys and CA, the proportion of correct responses for upright real characters was better than that for their upside-down versions. RL, the younger group, showed the same effect for the non-characters. Since the non-characters disrupt the global configuration while the inverted characters disrupt both local and global configurations, our results suggest that younger children recognize a word through an analysis of the local configuration, while older children, regardless of whether they are dyslexic or not, analyze the global configuration.

  12. Bilinguals Have Different Hemispheric Lateralization in Visual Word Processing from Monolinguals

    Directory of Open Access Journals (Sweden)

    Sze-Man Lam

    2011-05-01

Previous bilingual studies showed reduced hemispheric asymmetry in visual tasks such as face perception in bilinguals compared with monolinguals, suggesting that experience in reading one or two languages could be a modulating factor. Here we examined whether differences in hemispheric asymmetry in visual tasks can also be observed between bilinguals who have different language backgrounds. We compared the behavior of three language groups in a tachistoscopic English word sequential matching task: English monolinguals (alphabetic monolinguals, A-Ms), bilinguals with an alphabetic L1 and English L2 (alphabetic-alphabetic bilinguals, AA-Bs), and bilinguals with Chinese L1 and English L2 (logographic-alphabetic bilinguals, LA-Bs). The results showed that AA-Bs had a stronger right visual field / left hemisphere (LH) advantage than A-Ms and LA-Bs, suggesting that different language learning experiences can influence how visual words are processed in the brain. In addition, we showed that this effect could be accounted for by a computational model that implements a theory of hemispheric asymmetry in perception (the Double Filtering by Frequency theory; Ivry & Robertson, 1998); the modeling data suggested that this difference may be due to both the difference in participants' vocabulary size and the difference in word-to-sound mapping between alphabetic and logographic languages.

  13. Visual Word Recognition Across the Adult Lifespan

    Science.gov (United States)

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  14. Visual word recognition across the adult lifespan.

    Science.gov (United States)

    Cohen-Shikora, Emily R; Balota, David A

    2016-08-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult life span and across a large set of stimuli (N = 1,187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgment). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the word recognition system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly because of sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using 3 different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Activation of extrastriate and frontal cortical areas by visual words and word-like stimuli

    International Nuclear Information System (INIS)

    Petersen, S.E.; Fox, P.T.; Snyder, A.Z.; Raichle, M.E.

    1990-01-01

    Visual presentation of words activates extrastriate regions of the occipital lobes of the brain. When analyzed by positron emission tomography (PET), certain areas in the left, medial extrastriate visual cortex were activated by visually presented pseudowords that obey English spelling rules, as well as by actual words. These areas were not activated by nonsense strings of letters or letter-like forms. Thus visual word form computations are based on learned distinctions between words and nonwords. In addition, during passive presentation of words, but not pseudowords, activation occurred in a left frontal area that is related to semantic processing. These findings support distinctions made in cognitive psychology and computational modeling between high-level visual and semantic computations on single words and describe the anatomy that may underlie these distinctions

  16. Do handwritten words magnify lexical effects in visual word recognition?

    Science.gov (United States)

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  17. Processing of visual semantic information to concrete words : temporal dynamics and neural mechanisms indicated by event-related brain potentials

    NARCIS (Netherlands)

    van Schie, Hein T.; Wijers, Albertus A.; Mars, Rogier B.; Benjamins, Jeroen S.; Stowe, Laurie A.

    2005-01-01

    Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that

  18. Processing of visual semantic information to concrete words: temporal dynamics and neural mechanisms indicated by event-related brain potentials

    NARCIS (Netherlands)

    Schie, H.T. van; Wijers, A.A.; Mars, R.B.; Benjamins, J.S.; Stowe, L.A.

    2005-01-01

    Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that

  19. Lexical-Semantic Processing and Reading: Relations between Semantic Priming, Visual Word Recognition and Reading Comprehension

    Science.gov (United States)

    Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli

    2016-01-01

    The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…

  20. Processing of visual semantic information to concrete words: temporal dynamics and neural mechanisms indicated by event-related brain potentials.

    Science.gov (United States)

    van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A

    2005-05-01

Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval, subjects were presented with a lexical decision task on auditorily presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.

  1. The role of native-language phonology in the auditory word identification and visual word recognition of Russian-English bilinguals.

    Science.gov (United States)

    Shafiro, Valeriy; Kharkhurin, Anatoliy V

    2009-03-01

Does native-language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic categorization of words containing four phonological vowel contrasts (/i/-/u/, /ɪ/-/ɑ/, /i/-/ɪ/, /ɛ/-/æ/). Experiment 2 assessed auditory identification accuracy of words containing these four contrasts. Both bilingual groups demonstrated reduced accuracy in auditory identification of the two English vowel contrasts absent from their native phonology (/i/-/ɪ/, /ɛ/-/æ/). For late bilinguals, auditory identification difficulty was accompanied by poor visual word recognition for one difficult contrast (/i/-/ɪ/). Bilinguals' visual word recognition moderately correlated with their auditory identification of the difficult contrasts. These results indicate that native-language phonology can play a role in the visual processing of second-language words. However, this effect may be considerably constrained by the orthographic systems of specific languages.

  2. Syllabic Length Effect in Visual Word Recognition

    Directory of Open Access Journals (Sweden)

    Roya Ranjbar Mohammadi

    2014-07-01

Studies on visual word recognition have resulted in different and sometimes contradictory proposals, such as the Multi-Trace Memory model (MTM), the Dual-Route Cascaded model (DRC), and the Parallel Distributed Processing model (PDP). The role of the number of syllables in word recognition was examined using five groups of English words and non-words. Participants' reaction times to these words were measured with reaction-time software. The results indicated a syllabic effect on the recognition of both high- and low-frequency words. The pattern was incremental in terms of syllable number, and it prevailed in high- and low-frequency words and non-words, except for one-syllable words. In general, the results are in line with the PDP model, which claims that a single processing mechanism is used in the recognition of both words and non-words. In other words, the findings suggest that lexical items are mainly processed via a lexical route. A pedagogical implication of the findings would be that reading in English as a foreign language involves analytical processing of the syllables of words.

  3. Teach yourself visually Word 2013

    CERN Document Server

    Marmel, Elaine

    2013-01-01

Get up to speed on the newest version of Word with visual instruction. Microsoft Word is the standard for word processing programs, and the newest version offers additional functionality you'll want to use. Get up to speed quickly and easily with the step-by-step instructions and full-color screen shots in this popular guide! You'll see how to perform dozens of tasks, including how to set up and format documents and text; work with diagrams, charts, and pictures; use Mail Merge; post documents online; and much more. Easy-to-follow, two-page lessons make learning a snap.

  4. The role of visual acuity and segmentation cues in compound word identification

    Directory of Open Access Journals (Sweden)

Jukka Hyönä

    2012-06-01

Studies are reviewed that demonstrate how the foveal area of the eye constrains how compound words are identified during reading. When compound words are short, their letters can be identified during a single fixation, leading to the whole-word route dominating word recognition from early on. Hence, visually marking morpheme boundaries with hyphens slows down processing by encouraging morphological decomposition when holistic processing is a feasible option. In contrast, the decomposition route dominates the early stages of identifying long compound words. Thus, visual marking of morpheme boundaries facilitates the processing of long compound words, unless the initial fixation made on the word lands very close to the morpheme boundary. The reviewed pattern of results is explained by the visual acuity principle (Bertram & Hyönä, 2003) and the dual-route framework of morphological processing.

  5. An ERP investigation of visual word recognition in syllabary scripts.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2013-06-01

The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 (within-script priming), in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.

  6. Searching for the right word: Hybrid visual and memory search for words.

    Science.gov (United States)

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words from the phrases longer than two characters constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order

  7. The Influence of Orthographic Neighborhood Density and Word Frequency on Visual Word Recognition: Insights from RT Distributional Analyses

    Directory of Open Access Journals (Sweden)

Stephen Wee Hun Lim

    2016-03-01

The effects of orthographic neighborhood density and word frequency in visual word recognition were investigated using distributional analyses of response latencies in visual lexical decision. Main effects of density and frequency were observed in mean latencies. Distributional analyses, in addition, revealed a density × frequency interaction: for low-frequency words, density effects were mediated predominantly by distributional shifting, whereas for high-frequency words, density effects were absent except at the slower RTs, implicating distributional skewing. The present findings suggest that density effects in low-frequency words reflect processes involved in early lexical access, while the effects observed in high-frequency words reflect late postlexical checking processes.

  8. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  9. A dual-route perspective on brain activation in response to visual words: evidence for a length by lexicality interaction in the visual word form area (VWFA).

    Science.gov (United States)

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-02-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., "Does xxx sound like an existing word?") presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  10. Teach yourself visually WordPress

    CERN Document Server

    Majure, Janet

    2012-01-01

    Get your blog up and running with the latest version of WordPress WordPress is one of the most popular, easy-to-use blogging platforms and allows you to create a dynamic and engaging blog, even if you have no programming skills or experience. Ideal for the visual learner, Teach Yourself VISUALLY WordPress, Second Edition introduces you to the exciting possibilities of the newest version of WordPress and helps you get started, step by step, with creating and setting up a WordPress site. Author and experienced WordPress user Janet Majure shares advice, insight, and best practices for taking full

  11. Generating descriptive visual words and visual phrases for large-scale image applications.

    Science.gov (United States)

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual Words (BoWs) representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
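The core DVP idea, collecting visual-word pairs that frequently co-occur in images of the same category, can be sketched as follows. The toy data, the threshold, and the function name are our illustration, not the paper's implementation:

```python
from collections import Counter
from itertools import combinations

# Each image is a bag of quantized visual-word ids plus a category label
# (toy data; real systems quantize local descriptors into a vocabulary).
images = [
    ("bicycle", [3, 7, 7, 12]),
    ("bicycle", [3, 7, 12, 40]),
    ("bicycle", [3, 12, 55]),
    ("mug",     [9, 21, 40]),
    ("mug",     [9, 21, 55]),
]

def descriptive_pairs(images, category, min_count=2):
    """Visual-word pairs co-occurring in >= min_count images of a category,
    i.e., candidate descriptive visual phrases (DVPs) for that category."""
    pair_counts = Counter()
    for label, words in images:
        if label != category:
            continue
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return {p for p, c in pair_counts.items() if c >= min_count}

print(descriptive_pairs(images, "bicycle"))  # contains (3, 7), (3, 12), (7, 12)
```

Pairs that recur across images of one category survive the threshold, while pairs that appear only once (e.g., those involving background words shared across categories) are discarded.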

  12. The temporal dynamics of implicit processing of non-letter, letter, and word-forms in the human visual cortex

    Directory of Open Access Journals (Sweden)

    Lawrence Gregory Appelbaum

    2009-11-01

    The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading abilities. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple, perceptual targets (dot strings) embedded amongst character strings. Beginning at 130ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180ms. This right-sided effect was followed by bilateral positive occipital activity for false-fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly later. Additional early (130-150ms) negative-polarity activity over left occipital cortex and longer-latency centrally distributed responses (>300ms) were present, likely reflecting implicit activation of the previously reported 'visual-word-form' area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.

  13. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  14. Hemispheric asymmetry in holistic processing of words.

    Science.gov (United States)

    Ventura, Paulo; Delgado, João; Ferreira, Miguel; Farinha-Fernandes, António; Guerreiro, José C; Faustino, Bruno; Leite, Isabel; Wong, Alan C-N

    2018-05-13

    Holistic processing has been regarded as a hallmark of face perception, indicating the automatic and obligatory tendency of the visual system to process all face parts as a perceptual unit rather than in isolation. Studies involving lateralized stimulus presentation suggest that the right hemisphere dominates holistic face processing. Holistic processing can also be shown with other categories such as words and thus it is not specific to faces or face-like expertise. Here, we used divided visual field presentation to investigate the possibly different contributions of the two hemispheres for holistic word processing. Observers performed same/different judgment on the cued parts of two sequentially presented words in the complete composite paradigm. Our data indicate a right hemisphere specialization for holistic word processing. Thus, these markers of expert object recognition are domain general.

  15. Caffeine improves left hemisphere processing of positive words.

    Science.gov (United States)

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.

  16. Caffeine improves left hemisphere processing of positive words.

    Directory of Open Access Journals (Sweden)

    Lars Kuchinke

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.

  17. Music reading expertise modulates hemispheric lateralization in English word processing but not in Chinese character processing.

    Science.gov (United States)

    Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen

    2018-07-01

    Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. The automatic visual simulation of words: A memory reactivated mask slows down conceptual access.

    Science.gov (United States)

    Rey, Amandine E; Riou, Benoit; Vallet, Guillaume T; Versace, Rémy

    2017-03-01

    How do we represent the meaning of words? The present study assesses whether access to conceptual knowledge requires the reenactment of the sensory components of a concept. The reenactment-that is, simulation-was tested in a word categorisation task using an innovative masking paradigm. We hypothesised that a meaningless reactivated visual mask should interfere with the simulation of the visual dimension of concrete words. This assumption was tested in a paradigm in which participants were not aware of the link between the visual mask and the words to be processed. In the first phase, participants created a tone-visual mask or tone-control stimulus association. In the test phase, they categorised words that were presented with 1 of the tones. Results showed that words were processed more slowly when they were presented with the reactivated mask. This interference effect was only correlated with and explained by the value of the visual perceptual strength of the words (i.e., our experience with the visual dimensions associated with concepts) and not with other characteristics. We interpret these findings in terms of word access, which may involve the simulation of sensory features associated with the concept, even if participants were not explicitly required to access visual properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiment 2 and 3) measures showed congruency effect in only the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  20. Selective visual attention to emotional words: Early parallel frontal and visual activations followed by interactive effects in visual cortex.

    Science.gov (United States)

    Schindler, Sebastian; Kissler, Johanna

    2016-10-01

    Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.

  1. Visual information constrains early and late stages of spoken-word recognition in sentence context.

    Science.gov (United States)

    Brunellière, Angèle; Sánchez-García, Carolina; Ikumi, Nara; Soto-Faraco, Salvador

    2013-07-01

    Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and, whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger long-lasting N400, compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Visual recognition of permuted words

    Science.gov (United States)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We present our study in the context of dual-route theories of reading, and we observe that dual-route theory is consistent with our hypothesis of distinct underlying cognitive behavior for reading permuted and non-permuted words. We conducted three experiments in lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response time latencies for the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% in Urdu and by 11% in German. We also found a considerable difference in reading behavior between cursive and alphabetic languages: reading of Urdu is comparatively slower than reading of German due to the characteristics of its cursive script.

  3. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    Science.gov (United States)

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10; 9 female, 1 male) and sighted control (n = 15; 9 female, 6 male) participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We

  4. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    Science.gov (United States)

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  5. Modulation of brain activity by multiple lexical and word form variables in visual word recognition: A parametric fMRI study.

    Science.gov (United States)

    Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann

    2008-09-01

    Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
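The parametric approach described above, regressing trial-by-trial brain responses on several continuous psycholinguistic predictors at once, can be sketched with ordinary least squares. All data below are synthetic and the coefficient values are our illustration, not the study's pipeline or results:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Continuous, z-scored psycholinguistic predictors (synthetic).
log_freq   = rng.normal(0, 1, n_trials)  # word frequency (log)
length_n   = rng.normal(0, 1, n_trials)  # combined length / neighborhood size
typicality = rng.normal(0, 1, n_trials)  # orthographic typicality

# Simulate a voxel whose activity decreases with frequency and
# increases with length/N, plus measurement noise.
bold = 1.0 - 0.5 * log_freq + 0.3 * length_n + rng.normal(0, 0.2, n_trials)

# Multiple linear regression: one beta per predictor, estimated jointly,
# which is what lets correlated variables be teased apart.
design = np.column_stack([np.ones(n_trials), log_freq, length_n, typicality])
betas, *_ = np.linalg.lstsq(design, bold, rcond=None)
print(np.round(betas, 2))  # roughly [1.0, -0.5, 0.3, 0.0]
```

Because all predictors enter one design matrix, each beta reflects the unique contribution of its variable, unlike a factorial contrast that dichotomizes a continuous variable.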

  6. The impact of task demand on visual word recognition.

    Science.gov (United States)

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  8. Teach yourself visually complete WordPress

    CERN Document Server

    Majure, Janet

    2013-01-01

    Take your WordPress skills to the next level with these tips, tricks, and tasks Congratulations on getting your blog up and running with WordPress! Now are you ready to take it to the next level? Teach Yourself VISUALLY Complete WordPress takes you beyond the blogging basics with expanded tips, tricks, and techniques with clear, step-by-step instructions accompanied by screen shots. This visual book shows you how to incorporate forums, use RSS, obtain and review analytics, work with tools like Google AdSense, and much more.Shows you how to use mobile tools to edit a

  9. Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.

    Science.gov (United States)

    Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric

    2013-01-04

    It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.

  10. Right-hemispheric processing of non-linguistic word features

    DEFF Research Database (Denmark)

    Baumgaertner, Annette; Hartwigsen, Gesa; Roman Siebner, Hartwig

    2013-01-01

    […]-hemispheric homologues of classic left-hemispheric language areas may partly be due to processing nonlinguistic perceptual features of verbal stimuli. We used functional MRI (fMRI) to clarify the role of the right hemisphere in the perception of nonlinguistic word features in healthy individuals. Participants made perceptual, semantic, or phonological decisions on the same set of auditorily and visually presented word stimuli. Perceptual decisions required judgements about stimulus-inherent changes in font size (visual modality) or fundamental frequency contour (auditory modality). The semantic judgement required […], the right inferior frontal gyrus (IFG), an area previously suggested to support language recovery after left-hemispheric stroke, displayed modality-independent activation during perceptual processing of word stimuli. Our findings indicate that activation of the right hemisphere during language tasks may […]

  11. On the Functional Neuroanatomy of Visual Word Processing: Effects of Case and Letter Deviance

    Science.gov (United States)

    Kronbichler, Martin; Klackl, Johannes; Richlan, Fabio; Schurz, Matthias; Staffen, Wolfgang; Ladurner, Gunther; Wimmer, Heinz

    2009-01-01

    This functional magnetic resonance imaging study contrasted case-deviant and letter-deviant forms with familiar forms of the same phonological words (e.g., "TaXi" and "Taksi" vs. "Taxi") and found that both types of deviance led to increased activation in a left occipito-temporal region, corresponding to the visual word form area (VWFA). The…

  12. Why do pictures, but not visual words, reduce older adults' false memories?

    Science.gov (United States)

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  13. Visual processing of words in a patient with visual form agnosia: a behavioural and fMRI study.

    Science.gov (United States)

    Cavina-Pratesi, Cristiana; Large, Mary-Ellen; Milner, A David

    2015-03-01

    Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to "guess" remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area (FFA). We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word in Chinese.
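
    The divergence-point logic of this paradigm — finding the first time bin at which a competitor attracts reliably more fixations than the distractor — can be sketched on toy data. The threshold comparison below is a deliberate simplification standing in for the bin-by-bin statistical tests actually used in eye-tracking analyses; all names and values are illustrative.

    ```python
    import numpy as np

    def fixation_proportion(fixated):
        """Per-time-bin fixation proportion for one display item;
        `fixated` is an (n_trials, n_bins) boolean array."""
        return fixated.mean(axis=0)

    def divergence_bin(competitor, distractor, threshold=0.05):
        """First time bin where the competitor attracts more fixations
        than the distractor by at least `threshold` -- a simplified
        stand-in for a proper statistical divergence test."""
        diff = fixation_proportion(competitor) - fixation_proportion(distractor)
        above = np.nonzero(diff > threshold)[0]
        return int(above[0]) if above.size else None

    # Toy data: the competitor starts drawing fixations from bin 3 onward.
    competitor = np.zeros((10, 6), dtype=bool)
    competitor[:, 3:] = True
    distractor = np.zeros((10, 6), dtype=bool)
    print(divergence_bin(competitor, distractor))  # 3
    ```

    In the study, an earlier divergence bin for the morphemic competitor than for the whole-word competitor is what supports incremental, morpheme-first access.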

  15. Neuromagnetic correlates of audiovisual word processing in the developing brain.

    Science.gov (United States)

    Dinga, Samantha; Wu, Di; Huang, Shuyang; Wu, Caiyun; Wang, Xiaoshan; Shi, Jingping; Hu, Yue; Liang, Chun; Zhang, Fawen; Lu, Meng; Leiken, Kimberly; Xiang, Jing

    2018-06-01

    The brain undergoes enormous changes during childhood. Little is known about how the brain develops to serve word processing. The objective of the present study was to investigate the maturational changes of word processing in children and adolescents using magnetoencephalography (MEG). Responses to a word processing task were investigated in sixty healthy participants. Each participant was presented with simultaneous visual and auditory word pairs in "match" and "mismatch" conditions. The patterns of neuromagnetic activation from MEG recordings were analyzed at both sensor and source levels. Topography and source imaging revealed that word processing transitioned from bilateral connections to unilateral connections as age increased from 6 to 17 years old. Correlation analyses of language networks revealed that the path length of word processing networks negatively correlated with age (r = -0.833, p […]), whereas […] processing networks were positively correlated with age. In addition, males had more visual connections, whereas females had more auditory connections. The correlations between gender and path length, gender and connection strength, and gender and clustering coefficient demonstrated a developmental trend without reaching statistical significance. The results indicate that the developmental trajectory of word processing is gender specific. Since the neuromagnetic signatures of these gender-specific paths to adult word processing were determined using non-invasive, objective, and quantitative methods, the results may play a key role in understanding language impairments in pediatric patients in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
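
    The network measures named in this abstract — characteristic path length and clustering coefficient — are standard graph metrics computable from a binary connectivity matrix. The sketch below is a generic textbook implementation, not the study's analysis pipeline; the adjacency matrices are hypothetical.

    ```python
    import numpy as np

    def path_length(adj):
        """Characteristic path length of a binary undirected graph:
        mean shortest-path distance over connected node pairs (BFS)."""
        n = adj.shape[0]
        dists = []
        for s in range(n):
            d = np.full(n, -1)
            d[s] = 0
            frontier = [s]
            while frontier:
                nxt = []
                for u in frontier:
                    for v in np.nonzero(adj[u])[0]:
                        if d[v] < 0:
                            d[v] = d[u] + 1
                            nxt.append(v)
                frontier = nxt
            dists.extend(d[d > 0])
        return float(np.mean(dists))

    def clustering_coefficient(adj):
        """Mean local clustering coefficient: the fraction of a node's
        neighbour pairs that are themselves connected (triangles)."""
        coefs = []
        for u in range(adj.shape[0]):
            nb = np.nonzero(adj[u])[0]
            k = len(nb)
            if k < 2:
                coefs.append(0.0)
                continue
            links = adj[np.ix_(nb, nb)].sum() / 2   # edges among neighbours
            coefs.append(links / (k * (k - 1) / 2))
        return float(np.mean(coefs))

    triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
    print(path_length(triangle), clustering_coefficient(triangle))  # 1.0 1.0
    ```

    In the study, these per-participant metrics were then correlated with age (e.g., via `np.corrcoef(ages, metrics)` in this sketch's terms).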

  16. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval.

    Science.gov (United States)

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-02-12

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
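
    The two-stage ranking this abstract describes — per-topic word significance, then an iterative overall-word significance over all topics — can be sketched with a HITS-style mutual-reinforcement iteration. This is an interpretive sketch under stated assumptions, not the paper's exact PD-LST formulas: `topic_word` stands for non-negative topic-conditional visual-word weights from some latent topic model, and the update rule and `keep` parameter are illustrative.

    ```python
    import numpy as np

    def overall_word_significance(topic_word, n_iter=50):
        """HITS-style iteration: topic scores are backed by their strong
        words, and word scores by their membership in strong topics.
        `topic_word` is an (n_topics, n_words) non-negative matrix."""
        n_topics, n_words = topic_word.shape
        word = np.ones(n_words) / n_words
        for _ in range(n_iter):
            topic = topic_word @ word        # topics supported by strong words
            topic /= topic.sum()
            word = topic_word.T @ topic      # words central to strong topics
            word /= word.sum()
        return word

    def prune_dictionary(topic_word, keep=0.5):
        """Keep the top fraction of visual words by overall significance."""
        scores = overall_word_significance(topic_word)
        k = max(1, int(keep * scores.size))
        return np.argsort(scores)[::-1][:k]
    ```

    Words with low overall significance carry little discriminative power across topics and are dropped from the dictionary, which is the pruning step that improves retrieval efficiency.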

  17. Toddlers' language-mediated visual search: they need not have the words for it

    NARCIS (Netherlands)

    Johnson, E.K.; McQueen, J.M.; Hüttig, F.

    2011-01-01

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched

  18. Adult Word Recognition and Visual Sequential Memory

    Science.gov (United States)

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  19. Neural correlates of visualizations of concrete and abstract words in preschool children: A developmental embodied approach

    Directory of Open Access Journals (Sweden)

    Amedeo D'Angiulli

    2015-06-01

    The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array of a target plus three distractors (part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-Related Potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., < 300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300-699 ms) and late (i.e., 700-1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, post-auditory visualization involved right-hemispheric activity following a postero-anterior pathway sequence (occipital, parietal and temporal areas); conversely, matching visualization involved left-hemispheric activity following an antero-posterior pathway sequence (frontal, temporal, parietal and occipital areas). These results suggest that, similarly for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying

  20. Hemispheric Lateralization in Processing Emotional and Non-Emotional Kanji Words

    OpenAIRE

    NAGAE, Seiji

    2013-01-01

    The purpose of this study was to investigate the contribution of both hemispheres to the processing of positive, negative, and non-emotional Kanji words in normal individuals. Right-handed subjects were asked to read aloud the Kanji word presented in the visual half-field. Results showed that responses to positive and non-emotional words were more accurate in the RVF than in the LVF, but no difference was found for negative emotional words. Reaction time results indicated that processing of nega...

  1. The what, when, where, and how of visual word recognition.

    Science.gov (United States)

    Carreiras, Manuel; Armstrong, Blair C; Perea, Manuel; Frost, Ram

    2014-02-01

    A long-standing debate in reading research is whether printed words are perceived in a feedforward manner on the basis of orthographic information, with other representations such as semantics and phonology activated subsequently, or whether the system is fully interactive and feedback from these representations shapes early visual word recognition. We review recent evidence from behavioral, functional magnetic resonance imaging, electroencephalography, magnetoencephalography, and biologically plausible connectionist modeling approaches, focusing on how each approach provides insight into the temporal flow of information in the lexical system. We conclude that, consistent with interactive accounts, higher-order linguistic representations modulate early orthographic processing. We also discuss how biologically plausible interactive frameworks and coordinated empirical and computational work can advance theories of visual word recognition and other domains (e.g., object recognition). Copyright © 2013 Elsevier Ltd. All rights reserved.
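
    The interactive account this review favours can be made concrete with a toy simulation in the spirit of interactive-activation models (a deliberate simplification, not any published model's equations): word units pool bottom-up support from letter units and feed activation back down, so higher-order lexical knowledge reshapes early letter-level activity. The matrix `word_letters` and both rate parameters are illustrative assumptions.

    ```python
    import numpy as np

    def interactive_activation(letter_input, word_letters, n_steps=20,
                               alpha=0.2, beta=0.1):
        """Minimal interactive-activation sketch: word units integrate
        evidence from their letters (bottom-up), and active words boost
        their own letters (top-down feedback). `word_letters` is an
        (n_words, n_letter_units) binary membership matrix."""
        letters = letter_input.astype(float)
        words = np.zeros(word_letters.shape[0])
        for _ in range(n_steps):
            words += alpha * (word_letters @ letters - words)  # bottom-up
            letters += beta * (word_letters.T @ words)         # top-down
            letters = np.clip(letters, 0, 1)
            words = np.clip(words, 0, 1)
        return letters, words

    # Two 'words' over four letter units; letter 1 arrives degraded (0.2).
    W = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1]], dtype=float)
    noisy = np.array([1.0, 0.2, 0.0, 0.0])
    letters, words = interactive_activation(noisy, W)
    print(letters[1] > 0.2, words[0] > words[1])  # True True
    ```

    The degraded letter ends up more active than its input warranted because the word it belongs to feeds activation back — the signature of feedback shaping early orthographic processing, as opposed to a purely feedforward sweep.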

  2. Learning during processing: Word learning doesn't wait for word recognition to finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  3. Visual processing in pure alexia

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Habekost, Thomas; Gerlach, Christian

    2010-01-01

    […] affected. His visual apprehension span was markedly reduced for letters and digits. His reduced visual processing capacity was also evident when reporting letters from words. In an object decision task with fragmented pictures, NN's performance was abnormal. Thus, even in a pure alexic patient with intact...

  4. The Influence of Semantic Neighbours on Visual Word Recognition

    Science.gov (United States)

    Yates, Mark

    2012-01-01

    Although it is assumed that semantics is a critical component of visual word recognition, there is still much that we do not understand. One recent way of studying semantic processing has been in terms of semantic neighbourhood (SN) density, and this research has shown that semantic neighbours facilitate lexical decisions. However, it is not clear…

  5. Training by visual identification and writing leads to different visual word expertise N170 effects in preliterate Chinese children

    Directory of Open Access Journals (Sweden)

    Pei Zhao

    2015-10-01

    The N170 component of EEG evoked by visual words is an index of perceptual expertise for the visual word across different writing systems. In the present study, we investigated whether these N170 markers for Chinese, a very complex script, could emerge quickly after short-term learning (∼100 min) in young Chinese children, and whether early writing experience can enhance the acquisition of these neural markers for expertise. Two groups of preschool children received visual identification and free writing training, respectively. Short-term character training resulted in selective enhancement of the N170 to characters, consistent with normal expert processing. Visual identification training resulted in increased N170 amplitude to characters in the right hemisphere, and N170 amplitude differences between characters and faces were decreased, whereas the amplitude difference between characters and tools increased. Writing training led to the disappearance of an initial amplitude difference between characters and faces in the right hemisphere. These results show that N170 markers for visual expertise emerge rapidly in young children after word learning, independent of the type of script young children learn, and that visual identification and writing produce different effects.
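
    The core measurements here — averaging single-trial EEG epochs into an ERP and taking mean amplitude in an N170 window — are simple to sketch. The 150-190 ms window below is an assumed analysis choice for illustration, not the window reported by this study.

    ```python
    import numpy as np

    def erp_average(trials):
        """Average single-trial EEG epochs (n_trials, n_times) into an ERP."""
        return trials.mean(axis=0)

    def n170_amplitude(erp, times, window=(0.15, 0.19)):
        """Mean amplitude in an assumed N170 window (150-190 ms here);
        the exact window is an analysis decision, not fixed by the study."""
        mask = (times >= window[0]) & (times <= window[1])
        return float(erp[mask].mean())

    times = np.linspace(0, 0.5, 501)                 # seconds
    erp = np.zeros_like(times)
    erp[(times >= 0.15) & (times <= 0.19)] = -5.0    # a fake N170 deflection
    print(n170_amplitude(erp, times))  # -5.0
    ```

    Condition effects like those in the abstract (characters vs. faces, characters vs. tools) amount to differences between such window amplitudes computed per condition and hemisphere.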

  6. Visual Similarity of Words Alone Can Modulate Hemispheric Lateralization in Visual Word Recognition: Evidence From Modeling Chinese Character Recognition.

    Science.gov (United States)

    Hsiao, Janet H; Cheung, Kit

    2016-03-01

    In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume phonological processing being LH lateralized, we show that the difference in character type frequency alone is sufficient to exhibit the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than the minority type, demonstrating the modulation of visual similarity of words on hemispheric lateralization. Copyright © 2015 Cognitive Science Society, Inc.

  7. Effects of auditory and visual modalities in recall of words.

    Science.gov (United States)

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  8. Dynamic spatial organization of the occipito-temporal word form area for second language processing.

    Science.gov (United States)

    Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li

    2017-08-01

    Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017

  9. Encoding in the visual word form area: an fMRI adaptation study of words versus handwriting.

    Science.gov (United States)

    Barton, Jason J S; Fox, Christopher J; Sekunova, Alla; Iaria, Giuseppe

    2010-08-01

    Written texts are not just words but complex multidimensional stimuli, including aspects such as case, font, and handwriting style, for example. Neuropsychological reports suggest that left fusiform lesions can impair the reading of text for word (lexical) content, being associated with alexia, whereas right-sided lesions may impair handwriting recognition. We used fMRI adaptation in 13 healthy participants to determine if repetition-suppression occurred for words but not handwriting in the left visual word form area (VWFA) and the reverse in the right fusiform gyrus. Contrary to these expectations, we found adaptation for handwriting but not for words in both the left VWFA and the right VWFA homologue. A trend to adaptation for words but not handwriting was seen only in the left middle temporal gyrus. An analysis of anterior and posterior subdivisions of the left VWFA also failed to show any adaptation for words. We conclude that the right and the left fusiform gyri show similar patterns of adaptation for handwriting, consistent with a predominantly perceptual contribution to text processing.
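
    fMRI adaptation analyses typically quantify repetition suppression as a proportional reduction of the response to repeated versus novel stimuli. The generic index below is a common convention, not necessarily the contrast this study used.

    ```python
    import numpy as np

    def adaptation_index(novel, repeated):
        """Proportional repetition suppression: positive values mean the
        response to repeated stimuli is reduced relative to novel ones,
        i.e., fMRI adaptation. Inputs are mean responses per condition."""
        novel = np.asarray(novel, dtype=float)
        repeated = np.asarray(repeated, dtype=float)
        return (novel - repeated) / novel

    print(adaptation_index([2.0], [1.0]))  # [0.5]
    ```

    In this study's terms, a region "adapts for handwriting but not words" if the index is reliably positive when handwriting style repeats across stimuli but near zero when only the word identity repeats.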

  10. Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

    Science.gov (United States)

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976

  13. Effect of word familiarity on visually evoked magnetic fields.

    Science.gov (United States)

    Harada, N; Iwaki, S; Nakagawa, S; Yamaguchi, M; Tonoike, M

    2004-11-30

    This study investigated the effect of word familiarity of visual stimuli on the word-recognizing function of the human brain. Word familiarity is an index of the relative ease of word perception, and is characterized by facilitation and accuracy in word recognition. We studied the effect of word familiarity, using "Hiragana" (phonetic characters in Japanese orthography) characters as visual stimuli, on the elicitation of visually evoked magnetic fields with a word-naming task. The words were selected from a database of lexical properties of Japanese. The four "Hiragana" characters used were grouped and presented in 4 classes of degree of familiarity. Three components were observed in averaged waveforms of the root mean square (RMS) value at latencies of about 100 ms, 150 ms and 220 ms. The RMS value of the 220 ms component showed a significant positive correlation with the value of familiarity (F(3, 36) = 5.501, p = 0.035). ECDs of the 220 ms component were observed in the intraparietal sulcus (IPS). Increments in the RMS value of the 220 ms component, which might reflect ideographic word recognition (retrieving the word "as a whole"), were enhanced with increments in the value of familiarity. The interaction of characters, which increased with the value of familiarity, might function "as a large symbol" and enhance a "pop-out" function, with an escaping character inhibiting other characters and enhancing the segmentation of the character (as a figure) from the ground.
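
    The RMS waveform analysed here is simply the root-mean-square across MEG sensors at each time point; peaks in it index evoked components like the ~100, 150, and 220 ms responses above. A minimal computation (data shape is an assumption):

    ```python
    import numpy as np

    def rms_waveform(data):
        """Root-mean-square across sensors at each time point.
        `data` has shape (n_sensors, n_times); the result is a single
        waveform whose peaks mark evoked components."""
        return np.sqrt((data ** 2).mean(axis=0))

    # Two sensors, one time point: sqrt((3^2 + 4^2) / 2) = sqrt(12.5)
    print(rms_waveform(np.array([[3.0], [4.0]]))[0])  # ~3.5355
    ```

    Component amplitudes (such as the 220 ms value correlated with familiarity) are then read off this waveform at the latencies of interest.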

  14. Cultural constraints on brain development: evidence from a developmental study of visual word processing in Mandarin Chinese.

    Science.gov (United States)

    Cao, Fan; Lee, Rebecca; Shu, Hua; Yang, Yanhui; Xu, Guoqing; Li, Kuncheng; Booth, James R

    2010-05-01

    Developmental differences in phonological and orthographic processing in Chinese were examined in 9-year-olds, 11-year-olds, and adults using functional magnetic resonance imaging. Rhyming and spelling judgments were made to 2-character words presented sequentially in the visual modality. The spelling task showed greater activation than the rhyming task in right superior parietal lobule and right inferior temporal gyrus, and there were developmental increases across tasks bilaterally in these regions in addition to bilateral occipital cortex, suggesting increased reliance with age on visuo-orthographic analysis. The rhyming task showed greater activation than the spelling task in left superior temporal gyrus, and there were developmental decreases across tasks in this region, suggesting reduced reliance with age on phonological representations. The rhyming and spelling tasks included words with conflicting orthographic and phonological information (i.e., rhyming words spelled differently or nonrhyming words spelled similarly) or nonconflicting information. There was a developmental increase in the difference between conflicting and nonconflicting words in left inferior parietal lobule, suggesting greater engagement of systems for mapping between orthographic and phonological representations. Finally, there were developmental increases across tasks in an anterior (Brodmann areas [BA] 45 and 46) and posterior (BA 9) left inferior frontal gyrus, suggesting greater reliance on controlled retrieval and selection of posterior lexical representations.

  15. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and their eye movements are recorded. In Experiment 1, phonological information was manipulated at the level of full phonological overlap; in Experiment 2, at the level of partial phonological overlap; and in Experiment 3, phonological competitors sharing either full or partial overlap with targets were compared directly. Results of the three experiments showed phonological competitor effects at both the full-overlap and partial-overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  16. Emotion Word Processing: Effects of Word Type and Valence in Spanish-English Bilinguals

    Science.gov (United States)

    Kazanas, Stephanie A.; Altarriba, Jeanette

    2016-01-01

    Previous studies comparing emotion and emotion-laden word processing have used various cognitive tasks, including an Affective Simon Task (Altarriba and Basnight-Brown in "Int J Billing" 15(3):310-328, 2011), lexical decision task (LDT; Kazanas and Altarriba in "Am J Psychol", in press), and rapid serial visual processing…

  17. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  18. Words Do Come Easy (Sometimes)

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe Allerup

    multiple stimuli are presented simultaneously: Are words treated as units or wholes in visual short term memory? Using methods based on a Theory of Visual Attention (TVA), we measured perceptual threshold, visual processing speed and visual short term memory capacity for words and letters, in two simple...... a different pattern: Letters are perceived more easily than words, and this is reflected both in perceptual processing speed and short term memory capacity. So even if single words do come easy, they seem to enjoy no advantage in visual short term memory....

  19. Bedding down new words: Sleep promotes the emergence of lexical competition in visual word recognition.

    Science.gov (United States)

    Wang, Hua-Chen; Savage, Greg; Gaskell, M Gareth; Paulin, Tamara; Robidoux, Serje; Castles, Anne

    2017-08-01

    Lexical competition processes are widely viewed as the hallmark of visual word recognition, but little is known about the factors that promote their emergence. This study examined for the first time whether sleep may play a role in inducing these effects. A group of 27 participants learned novel written words, such as banara, at 8 am and were tested on their learning at 8 pm the same day (AM group), while 29 participants learned the words at 8 pm and were tested at 8 am the following day (PM group). Both groups were retested after 24 hours. Using a semantic categorization task, we showed that lexical competition effects, as indexed by slowed responses to existing neighbor words such as banana, emerged 12 h later in the PM group who had slept after learning but not in the AM group. After 24 h the competition effects were evident in both groups. These findings have important implications for theories of orthographic learning and broader neurobiological models of memory consolidation.

  20. The role of syllabic structure in French visual word recognition.

    Science.gov (United States)

    Rouibah, A; Taft, M

    2001-03-01

    Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.

  1. Functions of graphemic and phonemic codes in visual word-recognition.

    Science.gov (United States)

    Meyer, D E; Schvaneveldt, R W; Ruddy, M G

    1974-03-01

    Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.

  2. Word and face processing engage overlapping distributed networks: Evidence from RSVP and EEG investigations.

    Science.gov (United States)

    Robinson, Amanda K; Plaut, David C; Behrmann, Marlene

    2017-07-01

    Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class and elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentations. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different than the other object combinations. EEG results indicated the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than the right hemisphere. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Concreteness in Word Processing: ERP and Behavioral Effects in a Lexical Decision Task

    Science.gov (United States)

    Barber, Horacio A.; Otten, Leun J.; Kousta, Stavroula-Thaleia; Vigliocco, Gabriella

    2013-01-01

    Relative to abstract words, concrete words typically elicit faster response times and larger N400 and N700 event-related potential (ERP) brain responses. These effects have been interpreted as reflecting the denser links to associated semantic information of concrete words and their recruitment of visual imagery processes. Here, we examined…

  4. Visual word representation in the brain

    NARCIS (Netherlands)

    Ramakrishnan, K.; Groen, I.; Scholte, S.; Smeulders, A.; Ghebreab, S.

    2013-01-01

    The human visual system is thought to use features of intermediate complexity for scene representation. How the brain computationally represents intermediate features is unclear, however. To study this, we tested the Bag of Words (BoW) model in computer vision against human brain activity. This
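
    The Bag of Words model referenced here quantizes local image descriptors against a visual codebook and represents each image as a histogram of visual-word counts. A minimal sketch with toy 2-D descriptors and a hand-picked codebook (all values illustrative; real pipelines use e.g. SIFT descriptors and a k-means codebook):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized histogram of visual-word counts."""
    # Pairwise squared distances, shape (n_descriptors, n_words).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy 2-D descriptors and a 3-word codebook (illustrative values).
codebook = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
descriptors = np.array([[0.1, -0.2], [4.8, 5.1], [5.2, 4.9], [0.2, 5.3]])
print(bow_histogram(descriptors, codebook))  # word counts [1, 2, 1], normalized
```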

  5. Hemispheric asymmetry in the processing of negative and positive words: a divided field study.

    Science.gov (United States)

    Holtgraves, Thomas; Felton, Adam

    2011-06-01

    Research on the lateralisation of brain functions for emotion has yielded different results as a function of whether it is the experience, expression, or perceptual processing of emotion that is examined. Further, for the perception of emotion there appear to be differences between the processing of verbal and nonverbal stimuli. The present research examined the hemispheric asymmetry in the processing of verbal stimuli varying in emotional valence. Participants performed a lexical decision task for words varying in affective valence (but equated in terms of arousal) that were presented briefly to the right or left visual field. Participants were significantly faster at recognising positive words presented to the right visual field/left hemisphere. This pattern did not occur for negative words (and was reversed for high arousal negative words). These results suggest that the processing of verbal stimuli varying in emotional valence tends to parallel hemispheric asymmetry in the experience of emotion.

  6. Survival Processing Enhances Visual Search Efficiency.

    Science.gov (United States)

    Cho, Kit W

    2018-05-01

    Words rated for their survival relevance are remembered better than when rated using other well-known memory mnemonics. This finding, which is known as the survival advantage effect and has been replicated in many studies, suggests that our memory systems are molded by natural selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array with 8 or 16 objects. Although there was no difference in search times between the two rating scenarios when set size was 8, survival processing reduced visual search times when set size was 16. These findings reflect a search efficiency effect and suggest that, similar to our memory systems, our visual systems are also tuned toward self-preservation.

  7. Brain regions activated by the passive processing of visually- and auditorily-presented words measured by averaged PET images of blood flow change

    International Nuclear Information System (INIS)

    Peterson, S.E.; Fox, P.T.; Posner, M.I.; Raichle, M.E.

    1987-01-01

    A limited number of regions specific to input modality are activated by the auditory and visual presentation of single words. These regions include primary auditory and visual cortex, as well as modality-specific higher-order regions that may perform computations at the word level of analysis.

  8. Imagining the truth and the moon: an electrophysiological study of abstract and concrete word processing.

    Science.gov (United States)

    Gullick, Margaret M; Mitra, Priya; Coch, Donna

    2013-05-01

    Previous event-related potential studies have indicated that both a widespread N400 and an anterior N700 index differential processing of concrete and abstract words, but the nature of these components in relation to concreteness and imagery has been unclear. Here, we separated the effects of word concreteness and task demands on the N400 and N700 in a single word processing paradigm with a within-subjects, between-tasks design and carefully controlled word stimuli. The N400 was larger to concrete words than to abstract words, and larger in the visualization task condition than in the surface task condition, with no interaction. A marked anterior N700 was elicited only by concrete words in the visualization task condition, suggesting that this component indexes imagery. These findings are consistent with a revised or extended dual coding theory according to which concrete words benefit from greater activation in both verbal and imagistic systems. Copyright © 2013 Society for Psychophysiological Research.

  9. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    Science.gov (United States)

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap, Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick, Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  10. Emotion word processing: does mood make a difference?

    Science.gov (United States)

    Sereno, Sara C.; Scott, Graham G.; Yao, Bo; Thaden, Elske J.; O'Donnell, Patrick J.

    2015-01-01

    Visual emotion word processing has been in the focus of recent psycholinguistic research. In general, emotion words provoke differential responses in comparison to neutral words. However, words are typically processed within a context rather than in isolation. For instance, how does one's inner emotional state influence the comprehension of emotion words? To address this question, the current study examined lexical decision responses to emotionally positive, negative, and neutral words as a function of induced mood as well as their word frequency. Mood was manipulated by exposing participants to different types of music. Participants were randomly assigned to one of three conditions—no music, positive music, and negative music. Participants' moods were assessed during the experiment to confirm the mood induction manipulation. Reaction time results confirmed prior demonstrations of an interaction between a word's emotionality and its frequency. Results also showed a significant interaction between participant mood and word emotionality. However, the pattern of results was not consistent with mood-congruency effects. Although positive and negative mood facilitated responses overall in comparison to the control group, neither positive nor negative mood appeared to additionally facilitate responses to mood-congruent words. Instead, the pattern of findings seemed to be the consequence of attentional effects arising from induced mood. Positive mood broadens attention to a global level, eliminating the category distinction of positive-negative valence but leaving the high-low arousal dimension intact. In contrast, negative mood narrows attention to a local level, enhancing within-category distinctions, in particular, for negative words, resulting in less effective facilitation. PMID:26379570

  11. Does Top-Down Feedback Modulate the Encoding of Orthographic Representations During Visual-Word Recognition?

    Science.gov (United States)

    Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta

    2016-09-01

    In masked priming lexical decision experiments, there is a matched-case identity advantage for nonwords, but not for words (e.g., ERTAR-ERTAR vs. ertar-ERTAR, but not ALTAR-ALTAR vs. altar-ALTAR). This dissociation has been attributed to top-down feedback from lexical representations during orthographic encoding. Here, we examined whether a matched-case identity advantage also emerges for words when top-down feedback is minimized. We employed a task that taps prelexical orthographic processes: the masked prime same-different task. For "same" trials, results showed faster response times for targets when preceded by a briefly presented matched-case identity prime than when preceded by a mismatched-case identity prime. Importantly, this advantage was similar in magnitude for nonwords and words. This finding constrains the interplay of bottom-up versus top-down mechanisms in models of visual-word identification.
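
    The priming advantage in such designs reduces to a difference of mean response times between mismatched- and matched-case prime conditions, computed separately for words and nonwords. A minimal sketch on hypothetical RTs (all values invented for illustration):

```python
import numpy as np

def priming_effect(rt_mismatched, rt_matched):
    """Matched-case advantage in ms (positive = matched primes faster)."""
    return np.mean(rt_mismatched) - np.mean(rt_matched)

# Hypothetical per-trial RTs (ms); values are illustrative only.
words_matched = np.array([520, 530, 515, 525])
words_mismatched = np.array([545, 550, 540, 555])
nonwords_matched = np.array([600, 610, 605, 595])
nonwords_mismatched = np.array([630, 640, 625, 635])

# Comparable effect sizes for words and nonwords would argue for a
# prelexical locus of the matched-case identity advantage.
print(priming_effect(words_mismatched, words_matched))        # 25.0
print(priming_effect(nonwords_mismatched, nonwords_matched))  # 30.0
```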

  12. Comparison of the neural substrates mediating the semantic processing of Korean and English words using positron emission tomography

    International Nuclear Information System (INIS)

    Kim, Jea Jin; Kim, Myung Sun; Cho, Sang Soo; Kwon, Jun Soo; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Lee, Myung Chul

    2001-01-01

    This study was performed to identify the brain regions relatively specific to the semantic processing of Korean and English words, as well as the regions common to both. Regional cerebral blood flow associated with different semantic tasks was examined using [15O]H2O positron emission tomography in 13 healthy volunteers. The tasks consisted of semantic tasks for Korean words, semantic tasks for English words, and control tasks using simple pictures. The regions specific and common to each language were identified by the relevant subtraction analyses using statistical parametric mapping. Common to the semantic processing of both languages, activation was observed in the fusiform gyrus, particularly on the left side. In addition, activation of the left inferior temporal gyrus was found only in the semantic processing of English words. The regions specific to Korean words were observed in multiple areas, including the right primary auditory cortex, whereas the regions specific to English words were limited to the right posterior visual area. Internal phonological processing is engaged in performing the visual semantic task for Korean words, for which proficiency is high, whereas visual scanning plays an important role in performing the task for English words, for which proficiency is low.

  13. Don’t words come easy? A psychophysical exploration of word superiority

    Directory of Open Access Journals (Sweden)

    Randi Starrfelt

    2013-09-01

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, to explore the extent and limits of the WSE. Using a carefully controlled list of three-letter words, we show that a word superiority effect can be revealed in vocal reaction times even to undegraded stimuli. With a novel combination of psychophysics and mathematical modelling, we further show that the typical WSE is specifically reflected in perceptual processing speed: single words are simply processed faster than single letters. Intriguingly, when multiple stimuli are presented simultaneously, letters are perceived more easily than words, and this is reflected both in perceptual processing speed and visual short term memory capacity. So, even if single words come easy, there is a limit to the word superiority effect.

  14. Semantic word category processing in semantic dementia and posterior cortical atrophy.

    Science.gov (United States)

    Shebani, Zubaida; Patterson, Karalyn; Nestor, Peter J; Diaz-de-Grenu, Lara Z; Dawson, Kate; Pulvermüller, Friedemann

    2017-08-01

    There is general agreement that perisylvian language cortex plays a major role in lexical and semantic processing; but the contribution of additional, more widespread, brain areas in the processing of different semantic word categories remains controversial. We investigated word processing in two groups of patients whose neurodegenerative diseases preferentially affect specific parts of the brain, to determine whether their performance would vary as a function of semantic categories proposed to recruit those brain regions. Cohorts with (i) Semantic Dementia (SD), who have anterior temporal-lobe atrophy, and (ii) Posterior Cortical Atrophy (PCA), who have predominantly parieto-occipital atrophy, performed a lexical decision test on words from five different lexico-semantic categories: colour (e.g., yellow), form (oval), number (seven), spatial prepositions (under) and function words (also). Sets of pseudo-word foils matched the target words in length and bi-/tri-gram frequency. Word-frequency was matched between the two visual word categories (colour and form) and across the three other categories (number, prepositions, and function words). Age-matched healthy individuals served as controls. Although broad word processing deficits were apparent in both patient groups, the deficit was strongest for colour words in SD and for spatial prepositions in PCA. The patterns of performance on the lexical decision task demonstrate (a) general lexico-semantic processing deficits in both groups, though more prominent in SD than in PCA, and (b) differential involvement of anterior-temporal and posterior-parietal cortex in the processing of specific semantic categories of words. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Auditory attention enhances processing of positive and negative words in inferior and superior prefrontal cortex.

    Science.gov (United States)

    Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna

    2017-11-01

    Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Deep generative learning of location-invariant visual word recognition.

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words-which was the model's learning objective

  17. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.

  18. Teaching the Meaning of Words to Children with Visual Impairments

    Science.gov (United States)

    Vervloed, Mathijs P. J.; Loijens, Nancy E. A.; Waller, Sarah E.

    2014-01-01

    In the report presented here, the authors describe a pilot intervention study that was intended to teach children with visual impairments the meaning of far-away words, and that used their mothers as mediators. The aim was to teach both labels and deep word knowledge, which is the comprehension of the full meaning of words, illustrated through…

  19. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    Science.gov (United States)

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.
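
    HSF- and LSF-filtered stimuli of the kind used in such experiments are commonly produced by filtering in the 2-D Fourier domain. A minimal numpy sketch with an ideal (hard) filter; the cutoff and the toy "word image" are illustrative, not the study's parameters:

```python
import numpy as np

def split_spatial_frequencies(img, cutoff):
    """Split an image into low- and high-spatial-frequency parts
    using an ideal (hard) filter in the 2-D Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)  # distance from DC
    low = np.fft.ifft2(np.fft.ifftshift(F * (radius <= cutoff))).real
    high = np.fft.ifft2(np.fft.ifftshift(F * (radius > cutoff))).real
    return low, high

# A toy "word image": white bars on a gray background (values illustrative).
img = np.full((64, 64), 0.5)
img[20:44, 10:14] = 1.0
img[20:44, 30:34] = 1.0

low, high = split_spatial_frequencies(img, cutoff=8)
# The two bands are complementary: they sum back to the original image.
print(np.allclose(low + high, img))  # True
```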

  20. Deep generative learning of location-invariant visual word recognition

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words—which was the model's learning objective—is largely based on letter-level information.
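
    The approach described here (unsupervised layer-wise training on letter strings presented at several retinal positions, then linear decoding of word identity from a hidden layer) can be sketched at toy scale. Everything below is an illustrative assumption: a six-letter alphabet, four hypothetical three-letter "words", five retinal positions, and two stacked RBMs via scikit-learn rather than the authors' actual architecture:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import BernoulliRBM

    ALPHABET, SLOTS, WORD_LEN = 6, 7, 3
    WORDS = [(0, 1, 2), (3, 4, 5), (1, 3, 5), (2, 4, 0)]  # hypothetical words

    def render(word, position):
        """Encode a word at a retinal position as a binary slot-by-letter image."""
        x = np.zeros(SLOTS * ALPHABET)
        for i, letter in enumerate(word):
            x[(position + i) * ALPHABET + letter] = 1.0
        return x

    positions = range(SLOTS - WORD_LEN + 1)  # five possible retinal locations
    X = np.array([render(w, p) for w in WORDS for p in positions for _ in range(30)])
    y = np.array([wi for wi, _ in enumerate(WORDS) for _p in positions for _ in range(30)])

    # Greedy layer-wise unsupervised training of two RBM layers; word identity
    # is never given to the generative model itself
    rbm1 = BernoulliRBM(n_components=40, learning_rate=0.05, n_iter=30, random_state=0)
    h1 = rbm1.fit_transform(X)
    rbm2 = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=30, random_state=0)
    h2 = rbm2.fit_transform(h1)

    # Linear decoding of (location-invariant) word identity from the deepest layer
    clf = LogisticRegression(max_iter=1000).fit(h2, y)
    print(f"decoding accuracy: {clf.score(h2, y):.2f}")
    ```

    The linear readout plays the same diagnostic role as in the abstract: it tests how explicitly word identity is represented at a given layer, without the generative model ever being supervised on it.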

  1. Deep generative learning of location-invariant visual word recognition

    Directory of Open Access Journals (Sweden)

    Maria Grazia eDi Bono

    2013-09-01

Full Text Available It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words – which was the model’s learning objective – is largely based on letter-level information.

  2. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    OpenAIRE

    Jesse, A.; McQueen, J.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker...

  3. Music and words in the visual cortex: The impact of musical expertise.

    Science.gov (United States)

    Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent

    2017-01-01

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Effect of continuous positive airway pressure treatment for obstructive sleep apnoea on visual processing of degraded words.

    Science.gov (United States)

    Proudlove, Katie; Manuel, Ari; Hall, Rachel; Rieu, Romelie; Villarroel, Mauricio; Stradling, John

    2014-01-01

    In a previous uncontrolled study, continuous positive airway pressure (CPAP) therapy for obstructive sleep apnoea (OSA) improved vision in patients with diabetic macular oedema. We investigated whether the above improvement in vision (or visual processing) might have been due to reduced sleepiness, rather than a true improvement in retinal function. Twelve normal control subjects and 20 patients with OSA were tested for their ability to recognise degraded words, by means of a computer programme displaying 5-letter words every 4 s for 10 min, with variable amounts of the bottom half of the word missing; the percentage of the word necessary to achieve correct identification on average half the time was 'hunted' (the test score). All subjects were tested twice, 2-3 weeks apart; the OSA group after the commencement of CPAP. The Epworth Sleepiness Score (ESS) in patients was measured at the same visit. The test score at visit 1 was 26.7% for normal subjects and 31.6% for patients with OSA. At visit 2, the test score was 25.0% for normal subjects and 29.9% for patients with OSA. The groups showed a small and identical improvement over the trial period in the test score, of 1.7% (p = 0.01 and p = 0.03 for the normal and OSA groups, respectively). The group with OSA experienced a drop in ESS of 7.5 (SD 5.5) points following treatment. The small and identical improvement in both groups suggests only a similar learning effect rather than any improvement due to reduced sleepiness.
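
    The "hunting" procedure described above, which adapts the fraction of the word shown until the observer identifies it correctly about half the time, is a classic one-up/one-down staircase. A minimal sketch with a deterministic simulated observer (all numeric choices are illustrative, not the study's parameters):

    ```python
    def staircase(respond, start=50.0, step=2.0, n_reversals=8):
        """One-up/one-down staircase 'hunting' the ~50%-correct level.

        respond(level) -> True if the degraded word was identified correctly;
        level is the percentage of the word shown. Returns the estimated
        threshold as the mean level at the reversal points.
        """
        level, direction, reversals = start, 0, []
        while len(reversals) < n_reversals:
            new_direction = -1 if respond(level) else +1  # harder after a hit
            if direction and new_direction != direction:
                reversals.append(level)
            direction = new_direction
            level = min(100.0, max(0.0, level + new_direction * step))
        return sum(reversals) / len(reversals)

    # Deterministic simulated observer who needs at least 30% of the word visible
    print(staircase(lambda level: level >= 30.0))  # 29.0
    ```

    With this deterministic observer the level steps down from 50% until the first miss at 28%, then oscillates between 28% and 30%, so the mean of the reversal points settles at 29.0.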

  5. Crossmodal Activation of Visual Object Regions for Auditorily Presented Concrete Words

    Directory of Open Access Journals (Sweden)

    Jasper J F van den Bosch

    2011-10-01

Full Text Available Dual-coding theory (Paivio, 1986) postulates that the human mind represents objects not just with an analogous, or semantic, code, but with a perceptual representation as well. Previous studies (e.g., Fiebach & Friederici, 2004) indicated that the modality of this representation is not necessarily the one that triggers the representation. The human visual cortex contains several regions, such as the Lateral Occipital Complex (LOC), that respond specifically to object stimuli. To investigate whether these principally visual representation regions are also recruited for auditory stimuli, we presented subjects with spoken words with specific, concrete meanings (‘car’) as well as words with abstract meanings (‘hope’). Their brain activity was measured with functional magnetic resonance imaging. Whole-brain contrasts showed overlap between regions differentially activated by words for concrete objects compared to words for abstract concepts and visual regions activated by a contrast of object versus non-object visual stimuli. We functionally localized the LOC for individual subjects, and a preliminary analysis showed a trend for a concreteness effect in this region-of-interest at the group level. Appropriate further analysis might include connectivity and classification measures. These results can shed light on the role of crossmodal representations in cognition.

  6. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  7. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    Science.gov (United States)

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
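
    The phi-square statistic mentioned above is, in its standard form, the chi-square statistic for a contingency table divided by the number of observations. A sketch of how it could quantify the perceptual confusability of two words from their confusion-matrix rows (the exact formulation used in the study may differ from this generic version):

    ```python
    import numpy as np

    def phi_square(row_i, row_j):
        """Phi-square distance between two confusion-matrix rows.

        row_i, row_j: response count vectors for two stimulus words over the
        same set of response alternatives. Returns chi-square / N, which is 0
        when the two response distributions are identical (maximally
        confusable) and grows toward 1 as they diverge.
        """
        table = np.array([row_i, row_j], dtype=float)
        table = table[:, table.sum(axis=0) > 0]       # drop unused responses
        n = table.sum()
        expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
        chi2 = ((table - expected) ** 2 / expected).sum()
        return chi2 / n

    # Identical response distributions -> phi-square of 0
    print(phi_square([10, 5, 5], [20, 10, 10]))  # 0.0
    # Disjoint response distributions -> maximal phi-square of 1
    print(phi_square([20, 0, 0], [0, 10, 10]))   # 1.0
    ```

    Averaging such pairwise values over a word's lexical neighbours would yield a single confusability-based competition score per word.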

  8. The Lexical Status of the Root in Processing Morphologically Complex Words in Arabic

    Science.gov (United States)

    Shalhoub-Awwad, Yasmin; Leikin, Mark

    2016-01-01

    This study investigated the effects of the Arabic root in the visual word recognition process among young readers in order to explore its role in reading acquisition and its development within the structure of the Arabic mental lexicon. We examined cross-modal priming of words that were derived from the same root of the target…

  9. Large-scale functional networks connect differently for processing words and symbol strings.

    Science.gov (United States)

    Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta

    2018-01-01

    Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.
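
    Frequency-resolved coherence of the kind reported here can be computed with Welch-style cross-spectral estimates and then averaged within bands. A minimal sketch using `scipy.signal.coherence` on synthetic signals (the band edges are taken from the abstract; the signals, sampling rate, and segment length are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.signal import coherence

    BANDS = {"alpha": (8, 13), "high_beta": (21, 29),
             "low_gamma": (30, 45), "high_gamma": (60, 90)}  # Hz, per the abstract

    def band_coherence(x, y, fs, band, nperseg=512):
        """Mean magnitude-squared coherence between two signals within a band."""
        f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
        lo, hi = band
        return cxy[(f >= lo) & (f <= hi)].mean()

    # Synthetic example: two channels sharing a 10 Hz rhythm plus independent noise
    fs = 500
    t = np.arange(0, 20, 1 / fs)
    rng = np.random.default_rng(0)
    shared = np.sin(2 * np.pi * 10 * t)
    x = shared + rng.normal(0, 1, t.size)
    y = shared + rng.normal(0, 1, t.size)

    alpha = band_coherence(x, y, fs, BANDS["alpha"])
    gamma = band_coherence(x, y, fs, BANDS["high_gamma"])
    print(alpha > gamma)  # coherence concentrates in the band carrying the shared rhythm
    ```

    Applied to all channel pairs and conditions, such band-averaged coherence values are the raw material for the network comparisons described in the abstract.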

  10. The role of tone and segmental information in visual-word recognition in Thai.

    Science.gov (United States)

    Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira

    2017-07-01

Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and that orthographic information contributes more than phonological information.
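
    Facilitation and interference Stroop effects of the kind reported above are conventionally scored as each condition's mean reaction time relative to the neutral control. A minimal sketch (the condition names and RT values are invented for illustration):

    ```python
    def stroop_effects(mean_rts, neutral="neutral"):
        """Score Stroop conditions against a neutral control.

        mean_rts: dict mapping condition name -> mean RT in ms.
        Returns condition -> RT difference from neutral: negative values
        indicate facilitation (faster than neutral), positive interference.
        """
        base = mean_rts[neutral]
        return {c: rt - base for c, rt in mean_rts.items() if c != neutral}

    # Illustrative mean RTs in ms (not the study's data)
    rts = {"neutral": 620, "colour_word": 570, "tone_different": 600,
           "segment_same": 605}
    print(stroop_effects(rts))
    # {'colour_word': -50, 'tone_different': -20, 'segment_same': -15}
    ```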

  11. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    Science.gov (United States)

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and +30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some

  12. Role of syllable segmentation processes in peripheral word recognition.

    Science.gov (United States)

    Bernard, Jean-Baptiste; Calabrèse, Aurélie; Castet, Eric

    2014-12-01

Previous studies of foveal visual word recognition provide evidence for a low-level syllable decomposition mechanism occurring during the recognition of a word. We investigated whether such a decomposition mechanism also exists in peripheral word recognition. Single words were visually presented to subjects in the peripheral field using a 6° square gaze-contingent simulated central scotoma. In the first experiment, words were either unicolor or had their adjacent syllables segmented with two different colors (color/syllable congruent condition). Reaction times for correct word identification were measured for the two different conditions and for two different print sizes. Results show a significant decrease in reaction time for the color/syllable congruent condition compared with the unicolor condition. A second experiment suggests that this effect is specific to syllable decomposition and results from strategic control, presumably involving attentional factors, rather than from stimulus-driven processing.

  13. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience.

    Science.gov (United States)

    Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir

    2016-03-01

Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations for words. This further supports the notion that many visual regions in general, even early visual areas, also maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that, despite only short sensory substitution experience, orthographic task processing can dominate semantic processing in the VWFA. On a wider

  14. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2013-08-01

Full Text Available Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed, strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention to spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, as the subjects’ attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  15. Development of Embodied Word Meanings: Sensorimotor Effects in Children's Lexical Processing.

    Science.gov (United States)

    Inkster, Michelle; Wellsby, Michele; Lloyd, Ellen; Pexman, Penny M

    2016-01-01

    Previous research showed an effect of words' rated body-object interaction (BOI) in children's visual word naming performance, but only in children 8 years of age or older (Wellsby and Pexman, 2014a). In that study, however, BOI was established using adult ratings. Here we collected ratings from a group of parents for children's BOI experience (child-BOI). We examined effects of words' child-BOI and also words' imageability on children's responses in an auditory word naming task, which is suited to the lexical processing skills of younger children. We tested a group of 54 children aged 6-7 years and a comparison group of 25 adults. Results showed significant effects of both imageability and child-BOI on children's auditory naming latencies. These results provide evidence that children younger than 8 years of age have richer semantic representations for high imageability and high child-BOI words, consistent with an embodied account of word meaning.

  16. Implicit and explicit attention to pictures and words: An fMRI-study of concurrent emotional stimulus processing

    Directory of Open Access Journals (Sweden)

    Tobias eFlaisch

    2015-12-01

Full Text Available The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.

  17. Implicit and Explicit Attention to Pictures and Words: An fMRI-Study of Concurrent Emotional Stimulus Processing.

    Science.gov (United States)

    Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T

    2015-01-01

The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.

  18. Implicit and Explicit Attention to Pictures and Words: An fMRI-Study of Concurrent Emotional Stimulus Processing

    Science.gov (United States)

    Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T.

    2015-01-01

The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms. PMID:26733895

  19. Hemispheric asymmetry of emotion words in a non-native mind: a divided visual field study.

    Science.gov (United States)

    Jończyk, Rafał

    2015-05-01

    This study investigates hemispheric specialization for emotional words among proficient non-native speakers of English by means of the divided visual field paradigm. The motivation behind the study is to extend the monolingual hemifield research to the non-native context and see how emotion words are processed in a non-native mind. Sixty-eight females participated in the study, all highly proficient in English. The stimuli comprised 12 positive nouns, 12 negative nouns, 12 non-emotional nouns and 36 pseudo-words. To examine the lateralization of emotion, stimuli were presented unilaterally in a random fashion for 180 ms in a go/no-go lexical decision task. The perceptual data showed a right hemispheric advantage for processing speed of negative words and a complementary role of the two hemispheres in the recognition accuracy of experimental stimuli. The data indicate that processing of emotion words in a non-native language may require greater interhemispheric communication, but at the same time demonstrate a specific role of the right hemisphere in the processing of negative relative to positive valence. The results of the study are discussed in light of the methodological inconsistencies in the hemifield research as well as the non-native context in which the study was conducted.

  20. Artful terms: A study on aesthetic word usage for visual art versus film and music

    Science.gov (United States)

    Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan

    2012-01-01

    Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187–201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results provide important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms. PMID:23145287

  3. Selectivity of N170 for visual words in the right hemisphere: Evidence from single-trial analysis.

    Science.gov (United States)

    Yang, Hang; Zhao, Jing; Gaspar, Carl M; Chen, Wei; Tan, Yufei; Weng, Xuchu

    2017-08-01

    Neuroimaging and neuropsychological studies have identified the involvement of the right posterior region in the processing of visual words. Interestingly, in contrast, ERP studies of the N170 typically demonstrate selectivity for words more strikingly over the left hemisphere. Why is right hemisphere selectivity for words during the N170 epoch typically not observed, despite the clear involvement of this region in word processing? One possibility is that amplitude differences measured on averaged ERPs in previous studies may have been obscured by variation in peak latency across trials. This study examined this possibility by using single-trial analysis. Results show that words evoked greater single-trial N170s than control stimuli in the right hemisphere. Additionally, we observed larger trial-to-trial variability on N170 peak latency for words as compared to control stimuli over the right hemisphere. Results demonstrate that, in contrast to much of the prior literature, the N170 can be selective to words over the right hemisphere. This discrepancy is explained in terms of variability in trial-to-trial peak latency for responses to words over the right hemisphere. © 2017 Society for Psychophysiological Research.

  4. How Many Words Is a Picture Worth? Integrating Visual Literacy in Language Learning with Photographs

    Science.gov (United States)

    Baker, Lottie

    2015-01-01

    Cognitive research has shown that the human brain processes images more quickly than words, and images are more likely than text to remain in long-term memory. With the expansion of technology that allows people from all walks of life to create and share photographs with a few clicks, the world seems to value visual media more than ever…

  5. Neural Correlates of Word Recognition: A Systematic Comparison of Natural Reading and Rapid Serial Visual Presentation.

    Science.gov (United States)

    Kornrumpf, Benthe; Niefind, Florian; Sommer, Werner; Dimigen, Olaf

    2016-09-01

    Neural correlates of word recognition are commonly studied with (rapid) serial visual presentation (RSVP), a condition that eliminates three fundamental properties of natural reading: parafoveal preprocessing, saccade execution, and the fast changes in attentional processing load occurring from fixation to fixation. We combined eye-tracking and EEG to systematically investigate the impact of all three factors on brain-electric activity during reading. Participants read lists of words either actively with eye movements (eliciting fixation-related potentials) or maintained fixation while the text moved passively through foveal vision at a matched pace (RSVP-with-flankers paradigm, eliciting ERPs). The preview of the upcoming word was manipulated by changing the number of parafoveally visible letters. Processing load was varied by presenting words of varying lexical frequency. We found that all three factors have strong interactive effects on the brain's responses to words: Once a word was fixated, occipitotemporal N1 amplitude decreased monotonically with the amount of parafoveal information available during the preceding fixation; hence, the N1 component was markedly attenuated under reading conditions with preview. Importantly, this preview effect was substantially larger during active reading (with saccades) than during passive RSVP with flankers, suggesting that the execution of eye movements facilitates word recognition by increasing parafoveal preprocessing. Lastly, we found that the N1 component elicited by a word also reflects the lexical processing load imposed by the previously inspected word. Together, these results demonstrate that, under more natural conditions, words are recognized in a spatiotemporally distributed and interdependent manner across multiple eye fixations, a process that is mediated by active motor behavior.

  6. The Pattern Recognition in Cattle Brand using Bag of Visual Words and Support Vector Machines Multi-Class

    Directory of Open Access Journals (Sweden)

    Carlos Silva

    2018-03-01

    Full Text Available Automatic recognition of cattle-brand images is a necessity for the government agencies responsible for this activity. To support this process, this work presents a method that uses Bag of Visual Words to extract characteristics from cattle-brand images and a multi-class Support Vector Machine for classification. The method consists of six stages: (a) select a database of images; (b) extract points of interest (SURF); (c) create the vocabulary (K-means); (d) create the vector of image characteristics (visual words); (e) train and classify images (SVM); (f) evaluate the classification results. The accuracy of the method was tested on a municipal city hall database, where it achieved satisfactory results: 86.02% accuracy with a processing time of 56.705 seconds.
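The six-stage pipeline described in this abstract can be sketched as a runnable toy in pure Python. This is a sketch under stated assumptions, not the authors' implementation: the fabricated 2-D "descriptors" stand in for real SURF features, the tiny Lloyd's k-means stands in for the vocabulary step, and a nearest-centroid rule stands in for the paper's multi-class SVM.

```python
import random
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+


def kmeans(points, k, iters=20, seed=0):
    """Stage (c): cluster descriptors into a k-word visual vocabulary."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        # Recompute each centroid as the mean of its cluster (keep old if empty).
        centroids = [
            tuple(sum(col) / len(pts) for col in zip(*pts)) if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids


def bovw_histogram(descriptors, vocabulary):
    """Stage (d): quantize descriptors to nearest visual words -> normalized histogram."""
    counts = Counter(
        min(range(len(vocabulary)), key=lambda i: dist(d, vocabulary[i]))
        for d in descriptors
    )
    return [counts[i] / len(descriptors) for i in range(len(vocabulary))]


def make_image(center, rng, n=40):
    """Fabricated 2-D 'descriptors' for one brand image (stands in for stage (b), SURF)."""
    return [(rng.gauss(center[0], 0.3), rng.gauss(center[1], 0.3)) for _ in range(n)]


# Stage (a): a toy "database" with two brand classes.
rng = random.Random(1)
class_a = [make_image((0, 0), rng) for _ in range(10)]
class_b = [make_image((3, 3), rng) for _ in range(10)]

# Stages (c)-(d): pool all descriptors, build the vocabulary, encode each image.
vocab = kmeans([d for img in class_a + class_b for d in img], k=4)
hists = {label: [bovw_histogram(img, vocab) for img in imgs]
         for label, imgs in (("a", class_a), ("b", class_b))}

# Stage (e): nearest-centroid classifier (illustrative stand-in for the SVM).
class_centroids = {label: [sum(col) / len(col) for col in zip(*hs)]
                   for label, hs in hists.items()}


def classify(image_descriptors):
    h = bovw_histogram(image_descriptors, vocab)
    return min(class_centroids, key=lambda lbl: dist(h, class_centroids[lbl]))


# Stage (f): evaluate on a fresh image drawn from class "a".
print(classify(make_image((0, 0), rng)))  # prints "a"
```

In a real system the SURF extraction and SVM training would come from libraries such as OpenCV and scikit-learn; the structure of the pipeline, descriptors → vocabulary → histogram → classifier, is the same.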

  7. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink.

    Science.gov (United States)

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading.

  8. The role of visual processing speed in reading speed development.

    Science.gov (United States)

    Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane

    2013-01-01

    A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span) predicts single-word reading speed in both normal-reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple-letter report task in eight- and nine-year-old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short-term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short-term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children.

  10. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    Directory of Open Access Journals (Sweden)

    Nouman Ali

    Full Text Available With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.
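The integration idea in this abstract, one bag-of-visual-words histogram per feature type, concatenated into a single image signature, can be sketched as follows. Everything concrete here (the tiny hand-written vocabularies, the `fake_descs` generator standing in for real SIFT/SURF extraction, the image names) is an illustrative assumption, not from the paper.

```python
import random
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+


def quantize(descriptors, vocab):
    """Map each descriptor to its nearest visual word; return a normalized histogram."""
    counts = Counter(
        min(range(len(vocab)), key=lambda i: dist(d, vocab[i])) for d in descriptors
    )
    return [counts[i] / len(descriptors) for i in range(len(vocab))]


def integrated_signature(sift_descs, surf_descs, sift_vocab, surf_vocab):
    """The integration step: concatenate the per-feature BoVW histograms."""
    return quantize(sift_descs, sift_vocab) + quantize(surf_descs, surf_vocab)


def retrieve(query_sig, database):
    """Rank database images (name -> signature) by distance to the query signature."""
    return sorted(database, key=lambda name: dist(query_sig, database[name]))


# Toy vocabularies and fabricated descriptors; real ones would come from
# k-means over SIFT and SURF descriptors extracted from a training corpus.
rng = random.Random(0)
sift_vocab = [(0.0, 0.0), (1.0, 1.0)]
surf_vocab = [(0.0, 1.0), (1.0, 0.0)]


def fake_descs(bias, n=30):  # hypothetical stand-in for real SIFT/SURF extraction
    return [(rng.gauss(bias, 0.2), rng.gauss(bias, 0.2)) for _ in range(n)]


database = {
    "img_low":  integrated_signature(fake_descs(0.0), fake_descs(0.0), sift_vocab, surf_vocab),
    "img_high": integrated_signature(fake_descs(1.0), fake_descs(1.0), sift_vocab, surf_vocab),
}
query = integrated_signature(fake_descs(0.0), fake_descs(0.0), sift_vocab, surf_vocab)
print(retrieve(query, database)[0])  # prints "img_low"
```

Because the two histograms live in separate slices of the concatenated vector, each feature type contributes its own robustness (scale/rotation for SIFT, illumination for SURF) without interfering with the other's visual words.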

  11. Recall of short word lists presented visually at fast rates: effects of phonological similarity and word length.

    Science.gov (United States)

    Coltheart, V; Langdon, R

    1998-03-01

    Phonological similarity of visually presented list items impairs short-term serial recall. Lists of long words are also recalled less accurately than are lists of short words. These results have been attributed to phonological recoding and rehearsal. If subjects articulate irrelevant words during list presentation, both phonological similarity and word length effects are abolished. Experiments 1 and 2 examined effects of phonological similarity and recall instructions on recall of lists shown at fast rates (from one item per 0.114-0.50 sec), which might not permit phonological encoding and rehearsal. In Experiment 3, recall instructions and word length were manipulated using fast presentation rates. Both phonological similarity and word length effects were observed, and they were not dependent on recall instructions. Experiments 4 and 5 investigated the effects of irrelevant concurrent articulation on lists shown at fast rates. Both phonological similarity and word length effects were removed by concurrent articulation, as they were with slow presentation rates.

  12. No deficiency in left-to-right processing of words in dyslexia but evidence for enhanced visual crowding

    NARCIS (Netherlands)

    Callens, Maaike; Whitney, Carol; Tops, Wim; Brysbaert, Marc

    2013-01-01

    Whitney and Cornelissen hypothesized that dyslexia may be the result of problems with the left-to-right processing of words, particularly in the part of the word between the word beginning and the reader's fixation position. To test this hypothesis, we tachistoscopically presented consonant trigrams

  13. Word attributes and lateralization revisited: implications for dual coding and discrete versus continuous processing.

    Science.gov (United States)

    Boles, D B

    1989-01-01

    Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.

  14. Words Versus Pictures: Leveraging the Research on Visual Communication

    Directory of Open Access Journals (Sweden)

    Pauline Dewan

    2015-06-01

    Full Text Available Librarians, like many other occupations, tend to rely on text and underutilize graphics. Research on visual communication shows that pictures have a number of advantages over words. We can interact more effectively with colleagues and patrons by incorporating ideas from this research.

  15. How a hobby can shape cognition: visual word recognition in competitive Scrabble players.

    Science.gov (United States)

    Hargreaves, Ian S; Pexman, Penny M; Zdrazilova, Lenka; Sargious, Peter

    2012-01-01

    Competitive Scrabble is an activity that involves extraordinary word recognition experience. We investigated whether that experience is associated with exceptional behavior in the laboratory in a classic visual word recognition paradigm: the lexical decision task (LDT). We used a version of the LDT that involved horizontal and vertical presentation and a concreteness manipulation. In Experiment 1, we presented this task to a group of undergraduates, as these participants are the typical sample in word recognition studies. In Experiment 2, we compared the performance of a group of competitive Scrabble players with a group of age-matched nonexpert control participants. The results of a series of cognitive assessments showed that the Scrabble players and control participants differed only in Scrabble-specific skills (e.g., anagramming). Scrabble expertise was associated with two specific effects (as compared to controls): vertical fluency (relatively less difficulty judging lexicality for words presented in the vertical orientation) and semantic deemphasis (smaller concreteness effects for word responses). These results suggest that visual word recognition is shaped by experience, and that with experience there are efficiencies to be had even in the adult word recognition system.

  16. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    Science.gov (United States)

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2012-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…

  17. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases the reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  18. Early processing of orthographic language membership information in bilingual visual word recognition: Evidence from ERPs.

    Science.gov (United States)

    Hoversten, Liv J; Brothers, Trevor; Swaab, Tamara Y; Traxler, Matthew J

    2017-08-01

    For successful language comprehension, bilinguals often must exert top-down control to access and select lexical representations within a single language. These control processes may critically depend on identification of the language to which a word belongs, but it is currently unclear when different sources of such language membership information become available during word recognition. In the present study, we used event-related potentials to investigate the time course of influence of orthographic language membership cues. Using an oddball detection paradigm, we observed early neural effects of orthographic bias (Spanish vs. English orthography) that preceded effects of lexicality (word vs. pseudoword). This early orthographic pop-out effect was observed for both words and pseudowords, suggesting that this cue is available prior to full lexical access. We discuss the role of orthographic bias for models of bilingual word recognition and its potential role in the suppression of nontarget lexical information. Published by Elsevier Ltd.

  19. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    NARCIS (Netherlands)

    Jesse, A.; McQueen, J.M.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes

  20. Recalling taboo and nontaboo words.

    Science.gov (United States)

    Jay, Timothy; Caldwell-Harris, Catherine; King, Krista

    2008-01-01

    People remember emotional and taboo words better than neutral words. It is well known that words that are processed at a deep (i.e., semantic) level are recalled better than words processed at a shallow (i.e., purely visual) level. To determine how depth of processing influences recall of emotional and taboo words, a levels of processing paradigm was used. Whether this effect holds for emotional and taboo words has not been previously investigated. Two experiments demonstrated that taboo and emotional words benefit less from deep processing than do neutral words. This is consistent with the proposal that memories for taboo and emotional words are a function of the arousal level they evoke, even under shallow encoding conditions. Recall was higher for taboo words, even when taboo words were cued to be recalled after neutral and emotional words. The superiority of taboo word recall is consistent with cognitive neuroscience and brain imaging research.

  1. Many Neighbors are not Silent. fMRI Evidence for Global Lexical Activity in Visual Word Recognition.

    Directory of Open Access Journals (Sweden)

    Mario Braun

    2015-07-01

    Full Text Available Many neurocognitive studies have investigated the neural correlates of visual word recognition, some of which manipulated the orthographic neighborhood density of words and nonwords, believed to influence the activation of orthographically similar representations in a hypothetical mental lexicon. Previous neuroimaging research failed to find evidence for such global lexical activity associated with neighborhood density. Rather, effects were interpreted to reflect semantic or domain-general processing. The present fMRI study revealed effects of lexicality, orthographic neighborhood density and a lexicality by orthographic neighborhood density interaction in a silent reading task. For the first time we found greater activity for words and nonwords with a high number of neighbors. We propose that this activity in the dorsomedial prefrontal cortex reflects activation of orthographically similar codes in verbal working memory, thus providing evidence for global lexical activity as the basis of the neighborhood density effect. The interaction of lexicality by neighborhood density in the ventromedial prefrontal cortex showed lower activity in response to words with a high number of neighbors compared to nonwords with a high number of neighbors. In the light of these results, the facilitatory effect for words and inhibitory effect for nonwords with many neighbors observed in previous studies can be understood as being due to the operation of a fast-guess mechanism for words and a temporal deadline mechanism for nonwords, as predicted by models of visual word recognition. Furthermore, we propose that the lexicality effect, with higher activity for words compared to nonwords in inferior parietal and middle temporal cortex, reflects the operation of an identification mechanism based on local lexico-semantic activity.

  2. Resolving the locus of cAsE aLtErNaTiOn effects in visual word recognition: Evidence from masked priming.

    Science.gov (United States)

    Perea, Manuel; Vergara-Martínez, Marta; Gomez, Pablo

    2015-09-01

    Determining the factors that modulate the early access of abstract lexical representations is imperative for the formulation of a comprehensive neural account of visual-word identification. There is a current debate on whether the effects of case alternation (e.g., tRaIn vs. train) have an early or late locus in the word-processing stream. Here we report a lexical decision experiment using a technique that taps the early stages of visual-word recognition (i.e., masked priming). In the design, uppercase targets could be preceded by an identity/unrelated prime that could be in lowercase or alternating case (e.g., table-TABLE vs. crash-TABLE; tAbLe-TABLE vs. cRaSh-TABLE). Results revealed that the lowercase and alternating case primes were equally effective at producing an identity priming effect. This finding demonstrates that case alternation does not hinder the initial access to the abstract lexical representations during visual-word recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. WORD PROCESSING AND SECOND LANGUAGE WRITING: A LONGITUDINAL CASE STUDY

    Directory of Open Access Journals (Sweden)

    Alister Cumming

    2001-12-01

    Full Text Available The purpose of this study was to determine whether word processing might change a second language (L2) learner's writing processes and improve the quality of his essays over a relatively long period of time. We worked from the assumption that research comparing word processing to pen-and-paper composing tends to show positive results when studies include lengthy periods of data collection and when appropriate instruction and training are provided. We compared the processes and products of L2 composing displayed by a 29-year-old male Mandarin learner of English with intermediate proficiency in English while he wrote, over 8 months, 14 compositions grouped into 7 comparable pairs of topics alternating between uses of a laptop computer and of pen and paper. All keystrokes were recorded electronically in the computer environment; visual records of all text changes were made for the pen-and-paper writing. Think-aloud protocols were recorded in all sessions. Analyses indicate advantages for the word-processing medium over the pen-and-paper medium in terms of: (a) a greater frequency of revisions made at the discourse level and at the syntactical level; (b) higher scores for content on analytic ratings of the completed compositions; and (c) more extensive evaluation of written texts in think-aloud verbal reports.

  4. Feature activation during word recognition: action, visual, and associative-semantic priming effects

    Directory of Open Access Journals (Sweden)

    Kevin J.Y. Lam

    2015-05-01

    Full Text Available Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100 ms, 250 ms, 400 ms, and 1,000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100 ms, 250 ms, and 1,000 ms whereas a visual priming effect was seen only in the ISI of 1,000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.

  5. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability, concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting

    KAUST Repository

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights. © 2011 IEEE.
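    The QP assignment step described in this abstract can be illustrated with a small sketch: a local descriptor is reconstructed from its nearest visual words, and the non-negative, normalized reconstruction weights serve as its soft-assignment contributions to the histogram. This is a simplified stand-in that uses non-negative least squares rather than the authors' exact quadratic program; `assignment_weights` and the toy vocabulary are illustrative names, not from the paper.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def assignment_weights(descriptor, vocabulary, k=5):
        """Reconstruct a descriptor from its k nearest visual words and
        use the non-negative, normalized reconstruction weights as its
        soft-assignment contributions (simplified QP-assignment sketch)."""
        # k nearest visual words by Euclidean distance
        dists = np.linalg.norm(vocabulary - descriptor, axis=1)
        nn = np.argsort(dists)[:k]
        # Non-negative least squares: min ||B w - x||  subject to  w >= 0
        B = vocabulary[nn].T                 # (dim, k) basis of neighbors
        w, _ = nnls(B, descriptor)
        if w.sum() > 0:
            w /= w.sum()                     # weights act as histogram votes
        weights = np.zeros(len(vocabulary))
        weights[nn] = w
        return weights

    rng = np.random.default_rng(0)
    vocab = rng.random((50, 8))              # toy vocabulary: 50 words, 8-dim
    x = 0.6 * vocab[3] + 0.4 * vocab[7]      # descriptor between two words
    h = assignment_weights(x, vocab)
    print(round(h.sum(), 6), np.count_nonzero(h))
    ```

    In a full pipeline these per-descriptor weight vectors would be summed over all local features of an image to form its bag-of-features histogram.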

  8. Exploring the word superiority effect using TVA

    DEFF Research Database (Denmark)

    Starrfelt, Randi

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. It is unclear, however, if this is due to a lower threshold for perception of words, or a higher speed of processing for words than letters. We have investigated the WSE using methods based on a Theory of Visual Attention. In an experiment using single stimuli (words or letters) presented centrally, we show that the classical WSE is specifically reflected in perceptual processing speed. When several stimuli are presented simultaneously we find a different pattern: In a whole report experiment with six stimuli (letters or words), letters are perceived more easily than words, and this is reflected both in perceptual processing speed and short term memory capacity.

  9. The Dilemma of Word Processing

    Science.gov (United States)

    Kidwell, Richard

    1977-01-01

    Word processing is a system of communicating which suggests heavy dependence on the use of transcribing machines rather than manual shorthand. The pros and cons of this system are noted, including suggestions for changes in the business education curriculum relevant to the need for shorthand and/or word processing skill development. (SH)

  10. Emotional Picture and Word Processing: An fMRI Study on Effects of Stimulus Complexity

    Science.gov (United States)

    Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.

    2013-01-01

    Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009

  11. Implicit and explicit processing of kanji and kana words and non-words studied with fMRI.

    Science.gov (United States)

    Thuy, Dinh Ha Duy; Matsuo, Kayako; Nakamura, Kimihiro; Toma, Keiichiro; Oga, Tatsuhide; Nakai, Toshiharu; Shibasaki, Hiroshi; Fukuyama, Hidenao

    2004-11-01

    Using functional magnetic resonance imaging (fMRI), we investigated the implicit language processing of kanji and kana words (i.e., hiragana transcriptions of normally written kanji words) and non-words. Twelve right-handed native Japanese speakers performed size judgments for character stimuli (implicit language task for linguistic stimuli), size judgments for scrambled-character stimuli (implicit language task for non-linguistic stimuli), and lexical decisions (explicit language task). The size judgments for scrambled-kanji stimuli and scrambled-kana stimuli produced activations on the bilateral lingual gyri (BA 18), the bilateral occipitotemporal regions (BA 19/37), and the bilateral superior and inferior parietal cortices (BA 7/40). Interestingly, besides these areas, activations of the left inferior frontal region (Broca's area, BA 44/45) and the left posterior inferior temporal cortex (PITC, BA 37), which have been considered as language areas, were additionally activated during size judgment for kanji character stimuli. Size judgment for kana character stimuli also activated Broca's area, the left PITC, and the left supramarginal gyrus (SMG, BA 40). The activations of these language areas were replicated in the lexical decisions for both kanji and kana. These findings suggest that language processing of both kanji and kana scripts is obligatory to literate Japanese subjects. Moreover, comparison between the scrambled kanji and the scrambled kana showed no activation in the language areas, while greater activation in the bilateral fusiform gyri (left-side predominant) was found in kanji vs. kana comparison during the size judgment and the lexical decision. Kana minus kanji activated the left SMG during the size judgment, and Broca's area and the left middle/superior temporal junction during the lexical decision. These results probably reflect that in implicit or explicit reading of kanji words and kana words (i.e., hiragana transcriptions of kanji words

  12. Action word Related to Walk Heard by the Ears Activates Visual Cortex and Superior Temporal Gyrus: An fMRI Study

    Directory of Open Access Journals (Sweden)

    Naoyuki Osaka

    2012-10-01

    Full Text Available Cognitive neuroscience of language of action processing is one of the interesting issues on the cortical “seat” of word meaning and related action (Pulvermueller, 1999 Behavioral Brain Sciences 22 253–336). For example, generation of action verbs referring to various arm or leg actions (e.g., pick or kick) differentially activate areas along the motor strip that overlap with those areas activated by actual movement of the fingers or feet (Hauk et al., 2004 Neuron 41 301–307). Meanwhile, mimic words such as onomatopoeia have the potential to selectively and strongly stimulate specific brain regions having a specified “seat” of action meaning. In fact, mimic words highly suggestive of laughter and gaze significantly activated the extrastriate visual/premotor cortices and the frontal eye field, respectively (Osaka et al., 2003 Neuroscience Letters 340 127–130; 2009 Neuroscience Letters 461 65–68). However, the role of a mimic word related to walking on specific brain regions has not yet been investigated. The present study showed that a mimic word highly suggestive of human walking, heard by the ears with eyes closed, significantly activated the visual cortex located in extrastriate cortex and superior temporal gyrus, whereas hearing nonsense words that did not imply walking under the same task did not activate these areas. These areas may be critical regions for generating visual images of walking and related action.

  13. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    Science.gov (United States)

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal, the AV advantage, has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  14. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    Science.gov (United States)

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  15. Mapping the Words: Experimental visualizations of translation structures between Ancient Greek and Classical Arabic

    Directory of Open Access Journals (Sweden)

    Torsten Roeder

    2017-11-01

    Full Text Available The article deals with presentation forms of linguistic transformation processes from ancient Greek sources that were translated into classical Arabic from the 9th to 11th century AD. Various examples demonstrate how visualizations support the interpretation of corpus structures, lexical differentiation, grammatical transformation and translation processes for single lexemes in the database project Glossarium Graeco-Arabicum. The database contains about 100,000 manually collected word pairs (still growing) from 76 texts and their translations. The article discusses how the project utilizes Sankey diagrams, tree maps, balloon charts, data grids and classical coordinate systems to point out specific aspects of the data. Visualizations not only help beginners to understand the corpus structure, they also help editors and specialized users to identify specific phenomena. A well-documented interface design is crucial both for usability and interpretative work.

  16. The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words

    Science.gov (United States)

    Hoedemaker, Renske S.; Gordon, Peter C.

    2016-01-01

    In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
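    The divergence-point idea in this abstract, the earliest time at which reading-time distributions for primed and unprimed targets separate, can be sketched by comparing empirical survival curves S(t) = P(RT > t). This toy version is an assumption-laden stand-in: the simulated ex-Gaussian-like reading times, the fixed difference threshold, and the `divergence_point` helper are all illustrative, whereas the study itself used a bootstrap-based survival analysis.

    ```python
    import numpy as np

    def divergence_point(rt_a, rt_b, grid, threshold=0.02):
        """Return the earliest time at which the two empirical survival
        curves S(t) = P(RT > t) differ by more than `threshold`
        (simplified; the published method bootstraps this estimate)."""
        s_a = np.array([(rt_a > t).mean() for t in grid])
        s_b = np.array([(rt_b > t).mean() for t in grid])
        apart = np.where(np.abs(s_a - s_b) > threshold)[0]
        return float(grid[apart[0]]) if apart.size else None

    rng = np.random.default_rng(2)
    n = 5000
    # Toy ex-Gaussian-like reading times (ms): normal part + exponential tail
    related = rng.normal(300, 25, n) + rng.exponential(80, n)
    unrelated = related + 30          # fabricated 30 ms priming benefit
    grid = np.arange(150, 900, 5)
    dp = divergence_point(related, unrelated, grid)
    print(dp)
    ```

    With a constant shift the curves separate as soon as the fast tail of the distribution carries enough mass; distribution-dependent priming, as reported in the abstract, would instead move the divergence point relative to the onset of the reading-time distribution.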

  17. Food words distract the hungry: Evidence of involuntary semantic processing of task-irrelevant but biologically-relevant unexpected auditory words.

    Science.gov (United States)

    Parmentier, Fabrice B R; Pacheco-Unguetti, Antonia P; Valero, Sara

    2018-01-01

    Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants' biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants' biological needs.

  18. Processing negative valence of word pairs that include a positive word.

    Science.gov (United States)

    Itkes, Oksana; Mashal, Nira

    2016-09-01

    Previous research has suggested that cognitive performance is interrupted by negative relative to neutral or positive stimuli. We examined whether negative valence affects performance at the word or phrase level. Participants performed a semantic decision task on word pairs that included either a negative or a positive target word. In Experiment 1, the valence of the target word was congruent with the overall valence conveyed by the word pair (e.g., fat kid). As expected, response times were slower in the negative condition relative to the positive condition. Experiment 2 included target words that were incongruent with the overall valence of the word pair (e.g., fat salary). Response times were longer for word pairs whose overall valence was negative relative to positive, even though these word pairs included a positive word. Our findings support the Cognitive Primacy Hypothesis, according to which emotional valence is extracted after conceptual processing is complete.

  19. Short-Term and Long-Term Effects on Visual Word Recognition

    Science.gov (United States)

    Protopapas, Athanassios; Kapnoula, Efthymia C.

    2016-01-01

    Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item…

  20. Development of Embodied Word Meanings: Sensorimotor Effects in Children’s Lexical Processing

    Directory of Open Access Journals (Sweden)

    Michelle Inkster

    2016-03-01

    Full Text Available Previous research showed an effect of words’ rated body-object interaction (BOI) in children’s visual word naming performance, but only in children 8 years of age or older (Wellsby & Pexman, 2014a). In that study, however, BOI was established using adult ratings. Here we collected ratings from a group of parents for children’s body-object interaction experience (child-BOI). We examined effects of words’ child-BOI and also words’ imageability on children’s responses in an auditory word naming task, which is suited to the lexical processing skills of younger children. We tested a group of 54 children aged 6-7 years and a comparison group of 25 adults. Results showed significant effects of both imageability and child-BOI on children’s auditory naming latencies. These results provide evidence that children younger than 8 years of age have richer semantic representations for high imageability and high child-BOI words, consistent with an embodied account of word meaning.

  1. Top-down and bottom-up influences on the left ventral occipito-temporal cortex during visual word recognition: an analysis of effective connectivity.

    Science.gov (United States)

    Schurz, Matthias; Kronbichler, Martin; Crone, Julia; Richlan, Fabio; Klackl, Johannes; Wimmer, Heinz

    2014-04-01

    The functional role of the left ventral occipito-temporal cortex (vOT) in visual word processing has been studied extensively. A prominent observation is higher activation for unfamiliar but pronounceable letter strings compared to regular words in this region. Some functional accounts have interpreted this finding as driven by top-down influences (e.g., Dehaene and Cohen [2011]: Trends Cogn Sci 15:254-262; Price and Devlin [2011]: Trends Cogn Sci 15:246-253), while others have suggested a difference in bottom-up processing (e.g., Glezer et al. [2009]: Neuron 62:199-204; Kronbichler et al. [2007]: J Cogn Neurosci 19:1584-1594). We used dynamic causal modeling for fMRI data to test bottom-up and top-down influences on the left vOT during visual processing of regular words and unfamiliar letter strings. Regular words (e.g., taxi) and unfamiliar letter strings of pseudohomophones (e.g., taksi) were presented in the context of a phonological lexical decision task (i.e., "Does the item sound like a word?"). We found no differences in top-down signaling, but a strong increase in bottom-up signaling from the occipital cortex to the left vOT for pseudohomophones compared to words. This finding can be linked to functional accounts which assume that the left vOT contains neurons tuned to complex orthographic features such as morphemes or words [e.g., Dehaene and Cohen [2011]: Trends Cogn Sci 15:254-262; Kronbichler et al. [2007]: J Cogn Neurosci 19:1584-1594]: For words, bottom-up signals converge onto a matching orthographic representation in the left vOT. For pseudohomophones, the propagated signals do not converge, but (partially) activate multiple orthographic word representations, reflected in increased effective connectivity. Copyright © 2013 Wiley Periodicals, Inc.

  2. MODULATION OF EVENT-RELATED POTENTIALS BY WORD REPETITION - THE ROLE OF VISUAL SELECTIVE ATTENTION

    NARCIS (Netherlands)

    OTTEN, LJ; RUGG, MD; DOYLE, MC

    1993-01-01

    Event-related potentials (ERPs) were recorded while subjects viewed visually presented words, some of which occurred twice. Each trial consisted of two colored letter strings, the requirement being to attend to and make a word/nonword discrimination for one of the strings. Attention was manipulated

  3. Serial and parallel processing in reading: investigating the effects of parafoveal orthographic information on nonisolated word recognition.

    Science.gov (United States)

    Dare, Natasha; Shillcock, Richard

    2013-01-01

    We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n - 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual.

  4. Electrophysiological assessment of the time course of bilingual visual word recognition: Early access to language membership.

    Science.gov (United States)

    Yiu, Loretta K; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2015-08-01

    Previous research examining the time course of lexical access during word recognition suggests that phonological processing precedes access to semantic information, which in turn precedes access to syntactic information. Bilingual word recognition likely requires an additional level: knowledge of which language a specific word belongs to. Using the recording of event-related potentials, we investigated the time course of access to language membership information relative to semantic (Experiment 1) and syntactic (Experiment 2) encoding during visual word recognition. In Experiment 1, Spanish-English bilinguals viewed a series of printed words while making dual-choice go/nogo and left/right hand decisions based on semantic (whether the word referred to an animal or an object) and language membership information (whether the word was in English or in Spanish). Experiment 2 used a similar paradigm but with syntactic information (whether the word was a noun or a verb) as one of the response contingencies. The onset and peak latency of the N200, a component related to response inhibition, indicated that language information is accessed earlier than semantic information. Similarly, language information was also accessed earlier than syntactic information (but only based on peak latency). We discuss these findings with respect to models of bilingual word recognition and language comprehension in general. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Tracking the emergence of the consonant bias in visual-word recognition: evidence with developing readers.

    Science.gov (United States)

    Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat

    2014-01-01

    Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.

  6. Assessing the Usefulness of Google Books’ Word Frequencies for Psycholinguistic Research on Word Processing

    Science.gov (United States)

    Brysbaert, Marc; Keuleers, Emmanuel; New, Boris

    2011-01-01

    In this Perspective Article we assess the usefulness of Google's new word frequencies for word recognition research (lexical decision and word naming). We find that, despite the massive corpus on which the Google estimates are based (131 billion words from books published in the United States alone), the Google American English frequencies explain 11% less of the variance in the lexical decision times from the English Lexicon Project (Balota et al., 2007) than the SUBTLEX-US word frequencies, based on a corpus of 51 million words from film and television subtitles. Further analyses indicate that word frequencies derived from recent books (published after 2000) are better predictors of word processing times than frequencies based on the full corpus, and that word frequencies based on fiction books predict word processing times better than word frequencies based on the full corpus. The most predictive word frequencies from Google still do not explain more of the variance in word recognition times of undergraduate students and old adults than the subtitle-based word frequencies. PMID:21713191
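    The "percentage of variance explained" comparisons in this abstract come down to the R² of regressing response times on log word frequency. A minimal sketch on simulated data (the toy frequencies, reaction times, and the `variance_explained` helper are fabricated for illustration, not the English Lexicon Project data):

    ```python
    import numpy as np

    def variance_explained(log_freq, rt):
        """R^2 of a simple linear regression of response times on log10
        word frequency -- the statistic behind 'frequency measure X
        explains N% of the variance' comparisons."""
        r = np.corrcoef(log_freq, rt)[0, 1]
        return r ** 2

    rng = np.random.default_rng(1)
    logf = rng.uniform(0.5, 5.0, 1000)               # toy log10 frequencies
    rt = 900 - 60 * logf + rng.normal(0, 40, 1000)   # frequent words -> faster
    print(round(variance_explained(logf, rt), 2))
    ```

    Comparing two frequency norms, as the article does for Google Books versus SUBTLEX-US, amounts to computing this R² for each norm against the same set of lexical decision times.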

  7. Orthographic units in the absence of visual processing: Evidence from sublexical structure in braille.

    Science.gov (United States)

    Fischer-Baum, Simon; Englebretson, Robert

    2016-08-01

    Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically on the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically-complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Visual Processing of Verbal and Nonverbal Stimuli in Adolescents with Reading Disabilities.

    Science.gov (United States)

    Boden, Catherine; Brodeur, Darlene A.

    1999-01-01

    A study investigated whether 32 adolescents with reading disabilities (RD) were slower at processing visual information compared to children of comparable age and reading level, or whether their deficit was specific to the written word. Adolescents with RD demonstrated difficulties in processing rapidly presented verbal and nonverbal visual…

  9. The Embroidered Word: A Stitchery Overview for Visual Arts Education

    Science.gov (United States)

    Julian, June

    2012-01-01

    This historical research provides an examination of the embroidered word as a visual art piece, from early traditional examples to contemporary forms. It is intended to encourage appreciation of embroidery as an art form and to stimulate discussion about the role of historical contexts in the studio education of artists at the university level.…

  10. Word Processing Job Descriptions and Duties.

    Science.gov (United States)

    Gajewski-Johnson, Marlyce

    In order to develop a word processing career file at Milwaukee Area Technical College, employment managers at 124 Milwaukee-area businesses were asked to provide job descriptions for all word processing positions in the company; skill and knowledge requirements necessary to obtain these positions; employee appraisal forms; wage scales; a list of…

  11. Distance-dependent processing of pictures and words.

    Science.gov (United States)

    Amit, Elinor; Algom, Daniel; Trope, Yaacov

    2009-08-01

    A series of 8 experiments investigated the association between pictorial and verbal representations and the psychological distance of the referent objects from the observer. The results showed that people better process pictures that represent proximal objects and words that represent distal objects than pictures that represent distal objects and words that represent proximal objects. These results were obtained with various psychological distance dimensions (spatial, temporal, and social), different tasks (classification and categorization), and different measures (speed of processing and selective attention). The authors argue that differences in the processing of pictures and words emanate from the physical similarity of pictures, but not words, to the referents. Consequently, perceptual analysis is commonly applied to pictures but not to words. Pictures thus impart a sense of closeness to the referent objects and are preferably used to represent such objects, whereas words do not convey proximity and are preferably used to represent distal objects in space, time, and social perspective.

  12. Get rich quick: the signal to respond procedure reveals the time course of semantic richness effects during visual word recognition.

    Science.gov (United States)

    Hargreaves, Ian S; Pexman, Penny M

    2014-05-01

    According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision task (LDT) and a semantic categorization task (SCT). We used linear mixed-effects models to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400 ms). Results showed an early influence of number of senses and ARC in the SCT. In both the LDT and the SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.
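
    The logic of the time-course analysis can be illustrated with a deliberately simplified sketch: fit the effect of one object-based predictor (imageability) on decision latencies separately at each STR duration. This uses ordinary least squares instead of the authors' linear mixed-effects models, and synthetic data constructed so that the object-based effect only emerges at the longer durations.

    ```python
    import numpy as np

    # Synthetic illustration (not the study's data or analysis).
    rng = np.random.default_rng(0)
    imageability = rng.uniform(1, 7, 40)           # 1-7 rating scale
    str_durations = [75, 100, 200, 400]            # ms, as in the study
    # True effect (ms per rating point) built in: absent early, present late.
    true_slope = {75: 0.0, 100: 0.0, 200: -8.0, 400: -15.0}

    slopes = {}
    for dur in str_durations:
        rt = 600 + true_slope[dur] * imageability + rng.normal(0, 5, 40)
        slopes[dur] = np.polyfit(imageability, rt, 1)[0]  # estimated slope

    print({d: round(s, 1) for d, s in slopes.items()})
    ```

    A near-zero slope at 75 and 100 ms and an increasingly negative slope at 200 and 400 ms would mirror the reported pattern of late-arriving object-based effects.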

  13. Examining the direct and indirect effects of visual-verbal paired associate learning on Chinese word reading.

    Science.gov (United States)

    Georgiou, George; Liu, Cuina; Xu, Shiyang

    2017-08-01

    Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age = 58.99 months, SD = 3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Processing of Color Words Activates Color Representations

    Science.gov (United States)

    Richter, Tobias; Zwaan, Rolf A.

    2009-01-01

    Two experiments were conducted to investigate whether color representations are routinely activated when color words are processed. Congruency effects of colors and color words were observed in both directions. Lexical decisions on color words were faster when preceding colors matched the color named by the word. Color-discrimination responses…

  15. Walk-related mimic word activates the extrastriate visual cortex in the human brain: an fMRI study.

    Science.gov (United States)

    Osaka, Naoyuki

    2009-03-02

    I present an fMRI study demonstrating that a mimic word highly suggestive of human walking, heard with eyes closed, significantly activates visual cortex in the extrastriate occipital region (BA 19, 18) and the superior temporal sulcus (STS), while hearing nonsense words that do not imply walking under the same task does not activate these areas in humans. I concluded that BA 19 and BA 18 would be critical regions for generating visual images of walking and the related intentional stance, respectively, evoked by an onomatopoeic word that implies walking.

  16. Comparing different kinds of words and word-word relations to test an habituation model of priming.

    Science.gov (United States)

    Rieth, Cory A; Huber, David E

    2017-06-01

    Huber and O'Reilly (2003) proposed that neural habituation exists to solve a temporal parsing problem, minimizing blending between one word and the next when words are visually presented in rapid succession. They developed a neural dynamics habituation model, explaining the finding that short duration primes produce positive priming whereas long duration primes produce negative repetition priming. The model contains three layers of processing, including a visual input layer, an orthographic layer, and a lexical-semantic layer. The predicted effect of prime duration depends both on this assumed representational hierarchy and the assumption that synaptic depression underlies habituation. The current study tested these assumptions by comparing different kinds of words (e.g., words versus non-words) and different kinds of word-word relations (e.g., associative versus repetition). For each experiment, the predictions of the original model were compared to an alternative model with different representational assumptions. Experiment 1 confirmed the prediction that non-words and inverted words require longer prime durations to eliminate positive repetition priming (i.e., a slower transition from positive to negative priming). Experiment 2 confirmed the prediction that associative priming increases and then decreases with increasing prime duration, but remains positive even with long duration primes. Experiment 3 replicated the effects of repetition and associative priming using a within-subjects design and combined these effects by examining target words that were expected to repeat (e.g., viewing the target word 'BACK' after the prime phrase 'back to'). These results support the originally assumed representational hierarchy and more generally the role of habituation in temporal parsing and priming. Copyright © 2017 Elsevier Inc. All rights reserved.
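
    The core dynamic (prime activation builds, then synaptic resources deplete, so a long prime leaves less residual drive for the target) can be sketched with a toy two-variable simulation. The parameters and update rules below are my own illustrative choices, not Huber and O'Reilly's actual model equations.

    ```python
    # Toy habituation-by-synaptic-depression sketch (illustrative only).
    # While a prime is on, its representation activates, but the synaptic
    # resources it transmits through deplete in proportion to that activity.
    def residual_drive(prime_steps, gain=0.2, depletion=0.05, recovery=0.01):
        activation, resources = 0.0, 1.0
        for _ in range(prime_steps):
            activation += gain * resources * (1.0 - activation)
            resources += recovery * (1.0 - resources) - depletion * resources * activation
        # Drive the prime's representation passes on when the target appears:
        # high when active and undepleted, low once habituated.
        return activation * resources

    short = residual_drive(5)     # brief prime: active, not yet depleted
    long = residual_drive(100)    # long prime: habituated
    print(short, long)
    ```

    The short prime leaves substantial residual drive (blending with the target, hence positive priming), whereas the long prime's drive has habituated away, consistent with the transition from positive to negative repetition priming described above.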

  17. The processing of blend words in naming and sentence reading.

    Science.gov (United States)

    Johnson, Rebecca L; Slate, Sarah Rose; Teevan, Allison R; Juhasz, Barbara J

    2018-04-01

    Research exploring the processing of morphologically complex words, such as compound words, has found that they are decomposed into their constituent parts during processing. Although much is known about the processing of compound words, very little is known about the processing of lexicalised blend words, which are created from parts of two words, often with phoneme overlap (e.g., brunch). In the current study, blends were matched with non-blend words on a variety of lexical characteristics, and blend processing was examined using two tasks: a naming task and an eye-tracking task that recorded eye movements during reading. Results showed that blend words were processed more slowly than non-blend control words in both tasks. Blend words led to longer reaction times in naming and longer processing times on several eye movement measures compared to non-blend words. This was especially true for blends that were long, rated low in word familiarity, but were easily recognisable as blends.

  18. Don't words come easy? A psychophysical exploration of word superiority

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe Allerup

    2013-01-01

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, … and visual short-term memory capacity. So, even if single words come easy, there is a limit to the word superiority effect.

  19. A familiar font drives early emotional effects in word recognition.

    Science.gov (United States)

    Kuchinke, Lars; Krause, Beatrix; Fritsch, Nathalie; Briesemeister, Benny B

    2014-10-01

    The emotional connotation of a word is known to shift the process of word recognition. Using the electroencephalographic event-related potentials (ERPs) approach it has been documented that early attentional processing of high-arousing negative words is shifted at a stage of processing where a presented word cannot have been fully identified. Contextual learning has been discussed to contribute to these effects. The present study shows that a manipulation of the familiarity with a word's shape interferes with these earliest emotional ERP effects. Presenting high-arousing negative and neutral words in a familiar or an unfamiliar font results in very early emotion differences only in case of familiar shapes, whereas later processing stages reveal similar emotional effects in both font conditions. Because these early emotion-related differences predict later behavioral differences, it is suggested that contextual learning of emotional valence comprises more visual features than previously expected to guide early visual-sensory processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Parallel language activation during word processing in bilinguals: Evidence from word production in sentence context

    NARCIS (Netherlands)

    Starreveld, P.A.; de Groot, A.M.B.; Rossmark, B.M.M.; van Hell, J.G.

    2014-01-01

    In two picture-naming experiments we examined whether bilinguals co-activate the non-target language during word production in the target language. The pictures were presented out-of-context (Experiment 1) or in visually presented sentence contexts (Experiment 2). In both experiments different …

  1. Phonological Processes in Complex and Compound Words

    Directory of Open Access Journals (Sweden)

    Alieh Kord Zaferanlu Kambuziya

    2016-02-01

    This research aims at comparing phonological processes in complex and compound Persian words. Data are gathered from a 40,000-word Persian dictionary; 4,034 complex words and 1,464 compound ones were chosen, and the "Excel" software was used to count the data. Some results of the research are: 1- "Insertion" is the usual phonological process in complex words. More than half of the different insertions belong to the consonant /g/; /y/ and // are in the second and the third order, and the consonant /v/ has the lowest percentage of all. The highest percentage of vowel insertion belongs to /e/; the vowels /a/ and /o/ are in the second and third order. Deletion in complex words can only be seen in the consonant /t/ and the vowel /e/. 2- The most frequent phonological process in compounds is consonant deletion. In this process, seven different consonants are deleted, including /t/, //, /m/, /r/, /ǰ/, /d/, and /c/; the only deleted vowel is /e/. In both groups, complex and compound, /t/ deletion can be observed. A sequence of three consonants paves the way for the deletion of one of them; if one of the consonants is a sonorant like /n/, the deletion process rarely happens. 3- In complex words, consonant deletion yields a lighter syllable weight, whereas vowel deletion yields a heavier syllable weight, so both processes lead to bi-moraic weight. 4- The production of the bi-moraic syllable in Persian takes precedence over the Syllable Contact Law, so language-specific rules take precedence over universals. 5- Vowel insertion can be seen in both complex and compound words. In complex words, /e/ insertion plays the most fundamental part; the vowels /a/ and /o/ are in the second and third place. Whenever there are two ultra-heavy syllables in sequence, vowel insertion breaks the first syllable into two light syllables. The compounds that are influenced by vowel insertion can also be pronounced without any insertion.
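
    The tallying step described above (done by the authors in Excel) amounts to counting process types and segments per word class. A minimal sketch with `collections.Counter`; the records here are invented stand-ins, not the study's 5,498-word dataset.

    ```python
    from collections import Counter

    # Each record pairs a word class with the observed process and segment.
    # Invented examples for illustration only.
    records = [
        ("complex", "insertion", "g"), ("complex", "insertion", "g"),
        ("complex", "insertion", "y"), ("complex", "insertion", "e"),
        ("complex", "deletion", "t"),
        ("compound", "deletion", "t"), ("compound", "deletion", "m"),
        ("compound", "insertion", "e"),
    ]

    # How often each process occurs in each word class.
    by_type = Counter((wclass, proc) for wclass, proc, _ in records)
    # Which segments are inserted, and how often.
    segments = Counter(seg for _, proc, seg in records if proc == "insertion")

    print(by_type.most_common())
    print(segments.most_common(1))  # most frequent inserted segment
    ```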

  2. Spelling pronunciation and visual preview both facilitate learning to spell irregular words.

    Science.gov (United States)

    Hilte, Maartje; Reitsma, Pieter

    2006-12-01

    Spelling pronunciations are hypothesized to be helpful in building up relatively stable phonologically underpinned orthographic representations, particularly for learning words with irregular phoneme-grapheme correspondences. In a four-week computer-based training, the efficacy of spelling pronunciations and previewing the spelling patterns on learning to spell loan words in Dutch, originating from French and English, was examined in skilled and less skilled spellers with varying ages. Reading skills were taken into account. Overall, compared to normal pronunciation, spelling pronunciation facilitated the learning of the correct spelling of irregular words, but it appeared to be no more effective than previewing. Differences between training conditions appeared to fade with older spellers. Less skilled young spellers seemed to profit more from visual examination of the word as compared to practice with spelling pronunciations. The findings appear to indicate that spelling pronunciation and allowing a preview can both be effective ways to learn correct spellings of orthographically unpredictable words, irrespective of age or spelling ability.

  3. "LITTLE TRAGEDIES": THE POLYPHONY OF MUSIC, WORDS AND VISUAL IMAGERY

    Directory of Open Access Journals (Sweden)

    Nikolaeva Julia E.

    2015-01-01

    The music for the three-part television movie Little Tragedies (1979), based on Pushkin's literary works (directed by M. Schweitzer, music composed by A. Schnittke), is investigated. The trinity of music, poetic words and visual imagery, and their amazing consistency and reciprocal functioning, is considered in the aspect of polyphony as the universal logical principle of building an artistic form. All the music of the TV movie grows out of two leitmotifs, and their varied implementation in the film is exemplified by polyphonic analyses (music/words/images) of fragments from the four main film sections: "Scene from Faust", "Mozart and Salieri", "The Covetous Knight", and "A Feast in Time of Plague".

  4. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    Science.gov (United States)

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  5. Broca's region and Visual Word Form Area activation differ during a predictive Stroop task

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Gravholt, Claus Højbjerg; Skakkebæk, Anne

    2015-01-01

    … displayed in green or red (incongruent vs congruent colors). One of the colors, however, was presented three times as often as the other, making it possible to study both congruency and frequency effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible … to study frequency effects across modalities. We found significant behavioral effects of both incongruency and frequency. A significant effect (p …) of frequency was observed and no interaction. Conjoined effects of incongruency … and frequency were found in parietal regions as well as in the Visual Word Form Area (VWFA). No interaction between perceptual modality and frequency was found in the VWFA, suggesting that the region is not strictly visual. These findings speak against a strong version of the prediction error processing hypothesis …

  6. The development of written word processing: the case of deaf children

    Directory of Open Access Journals (Sweden)

    Jacqueline Leybaert

    2008-04-01

    Reading is a highly complex, flexible and sophisticated cognitive activity, and word recognition constitutes only a small and limited part of the whole process. It seems, however, that for various reasons word recognition is worth studying separately from other components. Considering that writing systems are secondary codes representing the language, word recognition mechanisms may appear as an interface between printed material and general language capabilities, and thus specific difficulties in reading and spelling acquisition should be located at the level of isolated word identification (see e.g. Crowder, 1982, for discussion). Moreover, it appears that a prominent characteristic of poor readers is their lack of efficiency in the processing of isolated words (Mitchell, 1982; Stanovich, 1982). And finally, word recognition seems to be a more automatic and less controlled component of the whole reading process.

  7. The Effect of Semantic Transparency on the Processing of Morphologically Derived Words: Evidence from Decision Latencies and Event-Related Potentials

    Science.gov (United States)

    Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F.

    2017-01-01

    Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these…

  8. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  9. Measuring, Predicting and Visualizing Short-Term Change in Word Representation and Usage in VKontakte Social Network

    OpenAIRE

    Stewart, Ian; Arendt, Dustin; Bell, Eric; Volkova, Svitlana

    2017-01-01

    Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. Such dynamics are especially notable during a period of crisis. This work addresses several important tasks of measuring, visualizing and predicting short term text representation shift, i.e. the change in a word's contextual semantics, and contrasting such shift with surface level word dynamics, or concept drift, observed in social media streams. ...

  10. Right-hemispheric processing of non-linguistic word features: implications for mapping language recovery after stroke.

    Science.gov (United States)

    Baumgaertner, Annette; Hartwigsen, Gesa; Siebner, Hartwig Roman

    2013-06-01

    Verbal stimuli often induce right-hemispheric activation in patients with aphasia after left-hemispheric stroke. This right-hemispheric activation is commonly attributed to functional reorganization within the language system. Yet previous evidence suggests that functional activation in right-hemispheric homologues of classic left-hemispheric language areas may partly be due to processing nonlinguistic perceptual features of verbal stimuli. We used functional MRI (fMRI) to clarify the role of the right hemisphere in the perception of nonlinguistic word features in healthy individuals. Participants made perceptual, semantic, or phonological decisions on the same set of auditorily and visually presented word stimuli. Perceptual decisions required judgements about stimulus-inherent changes in font size (visual modality) or fundamental frequency contour (auditory modality). The semantic judgement required subjects to decide whether a stimulus is natural or man-made; the phonological decision required deciding whether a stimulus contains two or three syllables. Compared to phonological or semantic decisions, nonlinguistic perceptual decisions resulted in stronger right-hemispheric activation. Specifically, the right inferior frontal gyrus (IFG), an area previously suggested to support language recovery after left-hemispheric stroke, displayed modality-independent activation during perceptual processing of word stimuli. Our findings indicate that activation of the right hemisphere during language tasks may, in some instances, be driven by a "nonlinguistic perceptual processing" mode that focuses on nonlinguistic word features. This raises the possibility that stronger activation of right inferior frontal areas during language tasks in aphasic patients with left-hemispheric stroke may at least partially reflect increased attentional focus on nonlinguistic perceptual aspects of language. Copyright © 2012 Wiley Periodicals, Inc.

  11. Visual determinants of reduced performance on the Stroop color-word test in normal aging individuals.

    Science.gov (United States)

    van Boxtel, M P; ten Tusscher, M P; Metsemakers, J F; Willems, B; Jolles, J

    2001-10-01

    It is unknown to what extent performance on the Stroop color-word test is affected by reduced visual function in older individuals. We tested the impact of common deficiencies in visual function (reduced distant and close acuity, reduced contrast sensitivity, and color weakness) on Stroop performance among 821 normal individuals aged 53 and older. After adjustment for age, sex, and educational level, low contrast sensitivity was associated with more time needed on card 1 (word naming), red/green color weakness with slower card 2 performance (color naming), and reduced distant acuity with slower performance on card 3 (interference). Half of the age-related variance in speed performance was shared with visual function. The actual impact of reduced visual function may be underestimated in this study if some of this age-related variance in Stroop performance is mediated by visual function decrements. It is suggested that reduced visual function has differential effects on Stroop performance, which need to be accounted for when the Stroop test is used both in research and in clinical settings. Stroop performance measured in older individuals with unknown visual status should be interpreted with caution.

  12. Depth of word processing in Alzheimer patients and normal controls: a magnetoencephalographic (MEG) study.

    Science.gov (United States)

    Walla, P; Püregger, E; Lehrner, J; Mayer, D; Deecke, L; Dal Bianco, P

    2005-05-01

    Effects related to depth of verbal information processing were investigated in probable Alzheimer's disease (AD) patients and age-matched controls. During word encoding sessions, 10 patients and 10 controls had to decide either whether the letter "s" appeared in visually presented words (alphabetical decision, shallow encoding) or whether the meaning of each presented word was animate or inanimate (lexical decision, deep encoding). These encoding sessions were followed by test sessions during which all previously encoded words were presented again together with the same number of new words. The task was then to discriminate between repeated and new words. Magnetic field changes related to brain activity were recorded with a whole-cortex MEG. Five probable AD patients showed recognition performances above chance level for both depths of information processing. These patients and 5 age-matched controls were then further analysed. Recognition performance was poorer in probable AD patients than in controls for both levels of processing. However, in both groups deep encoding led to higher recognition performance than shallow encoding. We therefore conclude that the performance reduction in the patient group was independent of depth of processing. Reaction times related to false alarms differed between patients and controls after deep encoding, which could perhaps already be used to support an early diagnosis. The analysis of the physiological data revealed significant differences between correctly recognised repetitions and correctly classified new words (old/new effect) in the control group, which were missing in the patient group after deep encoding. The lack of such an effect in the patient group is interpreted as being due to the neuropathology of probable AD. The present results demonstrate that magnetic field recordings represent a useful tool to physiologically distinguish between probable AD patients and age-matched controls.

  13. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    Science.gov (United States)

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding to be a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521
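
    What "flexible positional coding" means can be made concrete with one classic scheme, open-bigram coding (a standard illustration, not the connectionist model trained in the paper): a word is represented by its ordered letter pairs, so a transposed-letter string shares far more structure with its base word than a substituted-letter string does.

    ```python
    from itertools import combinations

    def open_bigrams(word):
        """All ordered letter pairs of a word, e.g. 'cat' -> {'ca','ct','at'}."""
        return {a + b for a, b in combinations(word, 2)}

    def overlap(w1, w2):
        """Jaccard overlap of the two words' open-bigram sets."""
        b1, b2 = open_bigrams(w1), open_bigrams(w2)
        return len(b1 & b2) / len(b1 | b2)

    # A transposed-letter prime preserves most open bigrams of the base word;
    # a substituted-letter prime does not.
    print(overlap("judge", "jugde"))   # transposition: 9/11 ≈ 0.818
    print(overlap("judge", "junpe"))   # substitution: 3/17 ≈ 0.176
    ```

    The paper's point is that such tolerance is not a fixed universal: a learning model exposed to an anagram-dense lexicon develops less flexible position coding, because transposition tolerance would conflate too many words.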

  14. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    Science.gov (United States)

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  15. Spatial attention in written word perception

    Directory of Open Access Journals (Sweden)

    Veronica eMontani

    2014-02-01

    Full Text Available The role of attention in visual word recognition and reading aloud is a long-debated issue. Studies of both developmental and acquired reading disorders provide growing evidence that spatial attention is critically involved in word reading, in particular for the phonological decoding of unfamiliar letter strings. However, studies on healthy participants have produced contrasting results. The aim of this study was to investigate how the allocation of spatial attention may influence the perception of letter strings in skilled readers. High-frequency words, low-frequency words and pseudowords were briefly and parafoveally presented either in the left or the right visual field. Attentional allocation was modulated by the presentation of a spatial cue before the target string. Accuracy in reporting the target string was modulated by the spatial cue, but this effect varied with the type of string. For unfamiliar strings, processing was facilitated when attention was focused on the string location and hindered when it was diverted from the target. This finding is consistent with the assumptions of the CDP+ model of reading aloud, as well as with familiarity-sensitive models that argue for a flexible use of attention according to the specific requirements of the string. Moreover, we found that processing of high-frequency words was facilitated by an extra-large focus of attention. The latter result is consistent with the hypothesis that a broad distribution of attention is the default mode during reading of familiar words because it might optimally engage the broad receptive fields of the highest detectors in the hierarchical system for visual word recognition.

  16. Individual differences in emotion word processing: A diffusion model analysis.

    Science.gov (United States)

    Mueller, Christina J; Kuchinke, Lars

    2016-06-01

    This exploratory study investigated individual differences in the implicit processing of emotional words in a lexical decision task. A processing advantage for positive words was observed, and differences between happy and fear-related words in response times were predicted by individual differences in specific variables of emotion processing: whereas more pronounced goal-directed behavior was related to a specific slowdown in processing of fear-related words, the rate of spontaneous eye blinks (indexing brain dopamine levels) was associated with a processing advantage for happy words. Estimating diffusion model parameters revealed that the drift rate (rate of information accumulation) captures unique variance of processing differences between happy and fear-related words, with the highest drift rates observed for happy words. Overall emotion recognition ability predicted individual differences in drift rates between happy and fear-related words. The findings emphasize that a significant amount of variance in emotion processing is explained by individual differences in behavioral data.
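
    In the diffusion model, a higher drift rate means evidence accumulates toward the decision boundary faster, predicting shorter response times. A minimal single-trial simulation sketch (the parameter values are illustrative assumptions, not the study's estimates):

```python
import random

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, seed=0):
    """Simulate one diffusion-model trial: evidence starts at 0 and
    accumulates at `drift` per second plus Gaussian noise until it
    reaches +boundary ("word") or -boundary ("nonword").
    Returns (response, decision time in seconds)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        # Euler-Maruyama step of the diffusion process
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return ("word" if x >= boundary else "nonword"), t

# Hypothetical drift rates: a higher rate for happy words predicts
# faster "word" decisions, mirroring the reported pattern.
for label, v in [("happy", 2.5), ("fear-related", 1.5)]:
    resp, rt = simulate_ddm(v, seed=42)
    print(label, resp, round(rt, 3))
```

    Averaged over many trials, the higher-drift condition yields reliably shorter decision times, which is how a drift-rate difference shows up as a response-time advantage.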

  17. Semantic Neighborhood Effects for Abstract versus Concrete Words.

    Science.gov (United States)

    Danguecan, Ashley N; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words.

  18. Spatial encoding of visual words for image classification

    Science.gov (United States)

    Liu, Dong; Wang, Shengsheng; Porikli, Fatih

    2016-05-01

    Appearance-based bag-of-visual-words (BoVW) models are employed to represent the frequency of a vocabulary of local features in an image. Due to their versatility, they are widely popular, although they ignore the underlying spatial context and relationships among the features. Here, we present a unified representation that enhances BoVW with explicit local and global structure models. Three aspects of our method should be noted in comparison to previous approaches. First, we use a local structure feature that encodes the spatial attributes between a pair of points in a discriminative fashion using class-label information. We introduce a bag-of-structural-words (BoSW) model for the given image set and describe each image with this model on its coarsely sampled relevant keypoints. We then combine the codebook histograms of BoVW and BoSW to train a classifier. Rigorous experimental evaluations on four benchmark data sets demonstrate that the unified representation outperforms the conventional models and compares favorably to more sophisticated scene classification techniques.
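
    The core BoVW step is vector quantization: each local descriptor is assigned to its nearest codeword and the image becomes a normalized histogram over the vocabulary; the paper's unified representation then concatenates this appearance histogram with a structural (BoSW) histogram. A minimal sketch with toy 2-D descriptors (the data and the uniform BoSW placeholder are illustrative, not the paper's pipeline):

```python
import math

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalised frequency histogram over the vocabulary."""
    hist = [0.0] * len(codebook)
    for desc in descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda k: math.dist(desc, codebook[k]))
        hist[nearest] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Toy 2-D "descriptors" and a 3-word codebook (illustrative values only).
codebook = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
descriptors = [(0.9, 1.1), (1.0, 0.9), (0.1, 0.0), (1.9, 0.1)]
appearance = bovw_histogram(descriptors, codebook)

# The unified representation concatenates the appearance (BoVW) histogram
# with a structural (BoSW) histogram before training a classifier; the
# BoSW histogram here is a uniform placeholder.
structure = [1 / 3] * 3
unified = appearance + structure
print(unified)
```

    In a real pipeline the codebook would come from clustering (e.g., k-means over training descriptors) and the concatenated vector would be fed to a classifier such as an SVM.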

  19. Experience with compound words influences their processing: An eye movement investigation with English compound words.

    Science.gov (United States)

    Juhasz, Barbara J

    2016-11-14

    Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.

  20. Tracing the time course of picture--word processing.

    Science.gov (United States)

    Smith, M C; Magee, L E

    1980-12-01

    A number of independent lines of research have suggested that semantic and articulatory information become available differentially from pictures and words. The first of the experiments reported here sought to clarify the time course by which information about pictures and words becomes available by considering the pattern of interference generated when incongruent pictures and words are presented simultaneously in a Stroop-like situation. Previous investigators report that picture naming is easily disrupted by the presence of a distracting word but that word naming is relatively immune to interference from an incongruent picture. Under the assumption that information available from a completed process may disrupt an ongoing process, these results suggest that words access articulatory information more rapidly than do pictures. Experiment 1 extended this paradigm by requiring subjects to verify the category of the target stimulus. In accordance with the hypothesis that pictures access the semantic code more rapidly than words, there was a reversal in the interference pattern: word categorization suffered considerable disruption, whereas picture categorization was minimally affected by the presence of an incongruent word. Experiment 2 sought to further test the hypothesis that access to semantic and articulatory codes is different for pictures and words by examining memory for those items following naming or categorization. Categorized words were better recognized than named words, whereas the reverse was true for pictures, a result which suggests that picture naming involves more extensive processing than picture categorization. Experiment 3 replicated this result under conditions in which viewing time was held constant. The last experiment extended the investigation of memory differences to a situation in which subjects were required to generate the superordinate category name. Here, memory for categorized pictures was as good as memory for named pictures.

  1. Emotional noun processing: an ERP study with rapid serial visual presentation.

    Science.gov (United States)

    Yi, Shengnan; He, Weiqi; Zhan, Lei; Qi, Zhengyang; Zhu, Chuanlin; Luo, Wenbo; Li, Hong

    2015-01-01

    Reading is an important part of our daily life, and rapid responses to emotional words have received a great deal of research interest. Our study employed rapid serial visual presentation to detect the time course of emotional noun processing using event-related potentials. We performed a dual-task experiment, where subjects were required to judge whether a given number was odd or even, and the category into which each emotional noun fit. In terms of P1, we found that there was no negativity bias for emotional nouns. However, emotional nouns elicited larger amplitudes in the N170 component in the left hemisphere than did neutral nouns. This finding indicated that in later processing stages, emotional words can be discriminated from neutral words. Furthermore, positive, negative, and neutral words were different from each other in the late positive complex, indicating that in the third stage, even different emotions can be discerned. Thus, our results indicate that in a three-stage model the latter two stages are more stable and universal.

  2. Emotional noun processing: an ERP study with rapid serial visual presentation.

    Directory of Open Access Journals (Sweden)

    Shengnan Yi

    Full Text Available Reading is an important part of our daily life, and rapid responses to emotional words have received a great deal of research interest. Our study employed rapid serial visual presentation to detect the time course of emotional noun processing using event-related potentials. We performed a dual-task experiment, where subjects were required to judge whether a given number was odd or even, and the category into which each emotional noun fit. In terms of P1, we found that there was no negativity bias for emotional nouns. However, emotional nouns elicited larger amplitudes in the N170 component in the left hemisphere than did neutral nouns. This finding indicated that in later processing stages, emotional words can be discriminated from neutral words. Furthermore, positive, negative, and neutral words were different from each other in the late positive complex, indicating that in the third stage, even different emotions can be discerned. Thus, our results indicate that in a three-stage model the latter two stages are more stable and universal.

  3. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    Science.gov (United States)

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  4. Processing advantage for emotional words in bilingual speakers.

    Science.gov (United States)

    Ponari, Marta; Rodríguez-Cuadrado, Sara; Vinson, David; Fox, Neil; Costa, Albert; Vigliocco, Gabriella

    2015-10-01

    Effects of emotion on word processing are well established in monolingual speakers. However, studies that have assessed whether affective features of words undergo the same processing in a native and a nonnative language have provided mixed results: studies that have found differences between native-language (L1) and second-language (L2) processing attributed the difference to the fact that an L2 learned late in life would not be processed affectively, because affective associations are established during childhood. Other studies suggest that adult learners show similar effects of emotional features in L1 and L2. Differences in affective processing of L2 words can be linked to age and context of learning, proficiency, language dominance, and degree of similarity between L2 and L1. Here, in a lexical decision task on tightly matched negative, positive, and neutral words, highly proficient English speakers from typologically different L1s showed the same facilitation in processing emotionally valenced words as native English speakers, regardless of their L1, the age of English acquisition, or the frequency and context of English use. (c) 2015 APA, all rights reserved.

  5. The Training of Morphological Decomposition in Word Processing and Its Effects on Literacy Skills

    Directory of Open Access Journals (Sweden)

    Irit Bar-Kochva

    2017-10-01

    Full Text Available This study set out to examine the effects of a morpheme-based training on reading and spelling in fifth and sixth graders (N = 47) who present poor literacy skills and speak German as a second language. A computerized training, consisting of a visual lexical decision task (comprising 2,880 items presented in 12 sessions), was designed to encourage fast morphological analysis in word processing. The children were divided into two groups: one underwent a morpheme-based training, in which word-stems of inflections and derivations were presented for a limited duration, while their pre- and suffixes remained on screen until response. The other group received a control training consisting of the same task, except that the duration of presentation of a non-morphological unit was restricted. In a Word Disruption Task, participants read words under three conditions: morphological separation (with symbols separating the words' morphemes), non-morphological separation (with symbols separating non-morphological units of words), and no separation (with symbols presented at the beginning and end of each word). The group receiving the morpheme-based program improved more than the control group in terms of word reading fluency in the morphological condition. The former group also presented similar word reading fluency after training in the morphological condition and in the no-separation condition, thereby suggesting that the morpheme-based training contributed to the integration of morphological decomposition into the process of word recognition. At the same time, both groups improved similarly in other measures of word reading fluency. With regard to spelling, the morpheme-based training group showed a larger improvement than the control group in spelling of trained items, and a unique improvement in spelling of untrained items (untrained word-stems integrated into trained pre- and suffixes).
The results further suggest some contribution of

  6. The putative visual word form area is functionally connected to the dorsal attention network.

    Science.gov (United States)

    Vogel, Alecia C; Miezin, Fran M; Petersen, Steven E; Schlaggar, Bradley L

    2012-03-01

    The putative visual word form area (pVWFA) is the most consistently activated region in single word reading studies (e.g., Vigneau et al., 2006), yet its function remains a matter of debate. The pVWFA may be predominantly used in reading, or it could be a more general visual processor used in reading but also in other visual tasks. Here, resting-state functional connectivity magnetic resonance imaging (rs-fcMRI) is used to characterize the functional relationships of the pVWFA to help adjudicate between these possibilities. rs-fcMRI defines relationships based on correlations in slow fluctuations of blood oxygen level-dependent activity occurring at rest. In this study, rs-fcMRI correlations show little relationship between the pVWFA and reading-related regions but a strong relationship between the pVWFA and dorsal attention regions thought to be related to spatial and feature attention. The rs-fcMRI correlations between the pVWFA and regions of the dorsal attention network increase with age and reading skill, while the correlations between the pVWFA and reading-related regions do not. These results argue that the pVWFA is not used predominantly in reading but is a more general visual processor used in other visual tasks as well as in reading.

  7. Extending models of visual-word recognition to semicursive scripts: Evidence from masked priming in Uyghur.

    Science.gov (United States)

    Yakup, Mahire; Abliz, Wayit; Sereno, Joan; Perea, Manuel

    2015-12-01

    One basic feature of the Arabic script is its semicursive style: some letters are connected to the next, but others are not, as in the Uyghur word [see text]/ya xʃi/ ("good"). None of the current orthographic coding schemes in models of visual-word recognition, which were created for the Roman script, assigns a differential role to the coding of within-letter "chunks" and between-letter "chunks" in words in the Arabic script. To examine how letter identity/position is coded at the earliest stages of word processing in the Arabic script, we conducted 2 masked priming lexical decision experiments in Uyghur, an agglutinative Turkic language. The target word was preceded by an identical prime, by a transposed-letter nonword prime (that either kept the ligation pattern or did not), or by a 2-letter replacement nonword prime. Transposed-letter primes were as effective as identity primes when the letter transposition in the prime kept the same ligation pattern as the target word (e.g., [see text]/inta_jin/-/itna_jin/), but not when the transposed-letter prime did not keep the ligation pattern (e.g., [see text]/so_w_ʁa_t/-/so_ʁw_a_t/). Furthermore, replacement-letter primes were more effective when they kept the ligation pattern of the target word than when they did not (e.g., [see text]/so_d_ʧa_t/-/so_w_ʁa_t/ faster than [see text]/so_ʧd_a_t/-/so_w_ʁa_t/). We examined how input coding schemes could be extended to deal with the intricacies of semicursive scripts. (c) 2015 APA, all rights reserved.

  8. Phonological Contribution during Visual Word Recognition in Child Readers. An Intermodal Priming Study in Grades 3 and 5

    Science.gov (United States)

    Sauval, Karinne; Casalis, Séverine; Perre, Laetitia

    2017-01-01

    This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…

  9. Spatial attention in written word perception.

    Science.gov (United States)

    Montani, Veronica; Facoetti, Andrea; Zorzi, Marco

    2014-01-01

    The role of attention in visual word recognition and reading aloud is a long-debated issue. Studies of both developmental and acquired reading disorders provide growing evidence that spatial attention is critically involved in word reading, in particular for the phonological decoding of unfamiliar letter strings. However, studies on healthy participants have produced contrasting results. The aim of this study was to investigate how the allocation of spatial attention may influence the perception of letter strings in skilled readers. High frequency words (HFWs), low frequency words and pseudowords were briefly and parafoveally presented either in the left or the right visual field. Attentional allocation was modulated by the presentation of a spatial cue before the target string. Accuracy in reporting the target string was modulated by the spatial cue, but this effect varied with the type of string. For unfamiliar strings, processing was facilitated when attention was focused on the string location and hindered when it was diverted from the target. This finding is consistent with the assumptions of the CDP+ model of reading aloud, as well as with familiarity-sensitive models that argue for a flexible use of attention according to the specific requirements of the string. Moreover, we found that processing of HFWs was facilitated by an extra-large focus of attention. The latter result is consistent with the hypothesis that a broad distribution of attention is the default mode during reading of familiar words because it might optimally engage the broad receptive fields of the highest detectors in the hierarchical system for visual word recognition.

  10. Transfer of L1 Visual Word Recognition Strategies during Early Stages of L2 Learning: Evidence from Hebrew Learners Whose First Language Is Either Semitic or Indo-European

    Science.gov (United States)

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2016-01-01

    The present study examined visual word recognition processes in Hebrew (a Semitic language) among beginning learners whose first language (L1) was either Semitic (Arabic) or Indo-European (e.g. English). To examine if learners, like native Hebrew speakers, exhibit morphological sensitivity to root and word-pattern morphemes, learners made an…

  11. Evidence for simultaneous syntactic processing of multiple words during reading

    NARCIS (Netherlands)

    Snell, Joshua; Meeter, Martijn; Grainger, Jonathan

    2017-01-01

    A hotly debated issue in reading research concerns the extent to which readers process parafoveal words, and how parafoveal information might influence foveal word recognition. We investigated syntactic word processing both in sentence reading and in reading isolated foveal words when these were

  12. Reading faces and Facing words

    DEFF Research Database (Denmark)

    Robotham, Julia Emma; Lindegaard, Martin Weis; Delfi, Tzvetelina Shentova

    It has long been argued that perceptual processing of faces and words is largely independent, highly specialised and strongly lateralised. Studies of patients with either pure alexia or prosopagnosia have strongly contributed to this view. The aim of our study was to investigate how visual perception of faces and words is affected by unilateral posterior stroke. Two patients with lesions in their dominant hemisphere and two with lesions in their non-dominant hemisphere were tested on sensitive tests of face and word perception during the stable phase of recovery. Despite all patients having unilateral lesions, we found no patient with a selective deficit in either reading or face processing. Rather, the patients showing a deficit in processing either words or faces were also impaired with the other category. One patient performed within the normal range on all tasks. In addition, all patients...

  13. Using Serial and Discrete Digit Naming to Unravel Word Reading Processes.

    Science.gov (United States)

    Altani, Angeliki; Protopapas, Athanassios; Georgiou, George K

    2018-01-01

    During reading acquisition, word recognition is assumed to undergo a developmental shift from slow serial/sublexical processing of letter strings to fast parallel processing of whole word forms. This shift has been proposed to be detectable by examining the size of the relationship between serial- and discrete-trial versions of word reading and rapid naming tasks. Specifically, a strong association between serial naming of symbols and single word reading suggests that words are processed serially, whereas a strong association between discrete naming of symbols and single word reading suggests that words are processed in parallel as wholes. In this study, 429 Grade 1, 3, and 5 English-speaking Canadian children were tested on serial and discrete digit naming and word reading. Across grades, single word reading was more strongly associated with discrete naming than with serial naming of digits, indicating that short high-frequency words are processed as whole units early in the development of reading ability in English. In contrast, serial naming was not a unique predictor of single word reading across grades, suggesting that within-word sequential processing was not required for the successful recognition of this set of words. Factor mixture analysis revealed that our participants could be clustered into two classes, namely beginning and more advanced readers. Serial naming uniquely predicted single word reading only among the first class of readers, indicating that novice readers rely on a serial strategy to decode words. Yet, a considerable proportion of Grade 1 students were assigned to the second class, evidently being able to process short high-frequency words as unitized symbols. We consider these findings together with those from previous studies to challenge the hypothesis of a binary distinction between serial/sublexical and parallel/lexical processing in word reading. We argue instead that sequential processing in word reading operates on a continuum
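
    The inferential logic here reduces to comparing two correlations: if single-word reading correlates more with discrete naming than with serial naming, parallel whole-word processing is inferred. A minimal sketch with a plain Pearson coefficient (the response-time data are invented for illustration, not the study's measurements):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative response times (s) for six children -- invented data.
discrete_naming = [0.52, 0.61, 0.48, 0.70, 0.55, 0.64]
serial_naming   = [14.0, 15.5, 16.2, 13.8, 15.0, 16.8]
word_reading    = [0.55, 0.66, 0.50, 0.74, 0.57, 0.69]

r_discrete = pearson(discrete_naming, word_reading)
r_serial = pearson(serial_naming, word_reading)
# Under the parallel-processing account, r_discrete should exceed r_serial.
print(round(r_discrete, 2), round(r_serial, 2))
```

    The study's actual analysis additionally tests whether serial naming explains unique variance once discrete naming is controlled (e.g., in a regression), which is a stronger criterion than the raw comparison sketched here.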

  14. Processing of visually presented clock times.

    Science.gov (United States)

    Goolkasian, P; Park, D C

    1980-11-01

    The encoding and representation of visually presented clock times was investigated in three experiments utilizing a comparative judgment task. Experiment 1 explored the effects of comparing times presented in different formats (clock face, digit, or word), and Experiment 2 examined angular distance effects created by varying positions of the hands on clock faces. In Experiment 3, encoding and processing differences between clock faces and digitally presented times were directly measured. Same/different reactions to digitally presented times were faster than to times presented on a clock face, and this format effect was found to be a result of differences in processing that occurred after encoding. Angular separation also had a limited effect on processing. The findings are interpreted within the framework of theories that refer to the importance of representational codes. The applicability to the data of Bank's semantic-coding theory, Paivio's dual-coding theory, and the levels-of-processing view of memory are discussed.

  15. Short-term retention of pictures and words: evidence for dual coding systems.

    Science.gov (United States)

    Pellegrino, J W; Siegel, A W; Dhawan, M

    1975-03-01

    The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments recall of pictures was superior to words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.

  16. Children with reading disability show brain differences in effective connectivity for visual, but not auditory word comprehension.

    Directory of Open Access Journals (Sweden)

    Li Liu

    2010-10-01

    Full Text Available Previous literature suggests that those with reading disability (RD) have more pronounced deficits during semantic processing in reading as compared to listening comprehension. This discrepancy has been supported by recent neuroimaging studies showing abnormal activity in RD during semantic processing in the visual but not in the auditory modality. Whether effective connectivity between brain regions in RD could also show this pattern of discrepancy has not been investigated. Children (8- to 14-year-olds) were given a semantic task in the visual and auditory modality that required an association judgment as to whether two sequentially presented words were associated. Effective connectivity was investigated using Dynamic Causal Modeling (DCM) on functional magnetic resonance imaging (fMRI) data. Bayesian Model Selection (BMS) was used separately for each modality to find a winning family of DCM models separately for typically developing (TD) and RD children. BMS yielded the same winning family with modulatory effects on bottom-up connections from the input regions to middle temporal gyrus (MTG) and inferior frontal gyrus (IFG), with inconclusive evidence regarding top-down modulations. Bayesian Model Averaging (BMA) was thus conducted across models in this winning family and compared across groups. The bottom-up effect from the fusiform gyrus (FG) to MTG, rather than the top-down effect from IFG to MTG, was stronger in TD compared to RD for the visual modality. The stronger bottom-up influence in TD was only evident for related word pairs but not for unrelated pairs. No group differences were noted in the auditory modality. This study revealed a modality-specific deficit for children with RD in bottom-up effective connectivity from orthographic to semantic processing regions. There were no group differences in connectivity from frontal regions, suggesting that the core deficit in RD is not in top-down modulation.

  17. Linguistic processing in visual and modality-nonspecific brain areas: PET recordings during selective attention.

    Science.gov (United States)

    Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto

    2004-07-01

    Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports the modality-nonspecific language processing and visual word-form processing, whereas the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's area, BA 45, 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.

  18. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    Science.gov (United States)

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  19. Neurophysiological evidence for the interplay of speech segmentation and word-referent mapping during novel word learning.

    Science.gov (United States)

    François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni

    2017-04-01

    Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representation (the word-to-world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and if they share common neurophysiological features. To address this question, we recorded EEG of 20 adult participants during both an audio-alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested for both the implicit detection of online mismatches (structural auditory and visual semantic violations) as well as for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio-alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio-alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. How does interhemispheric communication in visual word recognition work? Deciding between early and late integration accounts of the split fovea theory.

    Science.gov (United States)

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J

    2009-02-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

  1. The low-frequency encoding disadvantage: Word frequency affects processing demands.

    Science.gov (United States)

    Diana, Rachel A; Reder, Lynne M

    2006-07-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying other items (pictures and words of varying frequencies) along with low-frequency words reduces recognition performance for those co-studied items. Copyright 2006 APA, all rights reserved.

  2. Imaging When Acting: Picture but Not Word Cues Induce Action-Related Biases of Visual Attention

    Science.gov (United States)

    Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna

    2012-01-01

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters. PMID:23087656

  3. Imaging when acting: picture but not word cues induce action-related biases of visual attention.

    Science.gov (United States)

    Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna

    2012-01-01

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing - an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.

  4. Words don't come easy

    DEFF Research Database (Denmark)

    Starrfelt, Randi

    of reading, and with the use of functional imaging techniques. Extant evidence for (and against) cerebral specialization for visual word recognition is briefly reviewed and found inconclusive.                       Study I is a case study of a patient with a very selective alexia and agraphia affecting...... and object processing, may explain the pattern of activations found in our and other functional imaging studies of the visual word form area.                       Study III reports a patient (NN) with pure alexia. NN is not impaired in object recognition, but his deficit(s) affects processing speed...... reading and writing of letters and words but not numbers. This study raised questions of "where" in the cognitive system such a deficit may arise, and whether it can be attributed to a deficit in a system specialized for reading or letter knowledge. The following studies investigated these questions...

  5. Office automation: a look beyond word processing

    OpenAIRE

    DuBois, Milan Ephriam, Jr.

    1983-01-01

    Approved for public release; distribution is unlimited. Word processing was the first of various forms of office automation technologies to gain widespread acceptance and usability in the business world. For many, it remains the only form of office automation technology. Office automation, however, is not just word processing, although it does include the function of facilitating and manipulating text. In reality, office automation is not one innovation, or one office system, or one tech...

  6. BioWord: A sequence manipulation suite for Microsoft Word

    Directory of Open Access Journals (Sweden)

    Anzaldi Laura J

    2012-06-01

    Full Text Available Abstract Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms.

  7. BioWord: A sequence manipulation suite for Microsoft Word

    Science.gov (United States)

    2012-01-01

    Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms. PMID:22676326

  8. BioWord: a sequence manipulation suite for Microsoft Word.

    Science.gov (United States)

    Anzaldi, Laura J; Muñoz-Fernández, Daniel; Erill, Ivan

    2012-06-07

    The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms.
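    The everyday manipulations that BioWord bundles (reverse complement, dyad search, and the like) are straightforward to express in any language. As an illustration only — BioWord itself is written in VBA, and the helpers below are simplified stand-ins rather than its actual code — here is a minimal Python sketch of two such operations:

    ```python
    # Illustrative sketch (not BioWord's VBA code): two everyday sequence
    # manipulations of the kind BioWord bundles into a Word tab.

    COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

    def reverse_complement(seq: str) -> str:
        """Return the reverse complement of a DNA sequence."""
        return seq.translate(COMPLEMENT)[::-1]

    def find_dyads(seq: str, arm: int, gap: int) -> list:
        """Return start positions of dyads: an `arm`-long motif followed,
        after `gap` bases, by its own reverse complement (an inverted repeat)."""
        hits = []
        for i in range(len(seq) - 2 * arm - gap + 1):
            left = seq[i:i + arm]
            right = seq[i + arm + gap:i + 2 * arm + gap]
            if right == reverse_complement(left):
                hits.append(i)
        return hits

    print(reverse_complement("ATGC"))            # GCAT
    print(find_dyads("TTGACAxxxxTGTCAA", 6, 4))  # [0]
    ```

    The dyad search here is the naive O(n·arm) scan; a real tool would also allow mismatches within the arms.
    
    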

  9. Picturing words? Sensorimotor cortex activation for printed words in child and adult readers

    Science.gov (United States)

    Dekker, Tessa M.; Mareschal, Denis; Johnson, Mark H.; Sereno, Martin I.

    2014-01-01

    Learning to read involves associating abstract visual shapes with familiar meanings. Embodiment theories suggest that word meaning is at least partially represented in distributed sensorimotor networks in the brain (Barsalou, 2008; Pulvermueller, 2013). We explored how reading comprehension develops by tracking when and how printed words start activating these “semantic” sensorimotor representations as children learn to read. Adults and children aged 7–10 years showed clear category-specific cortical specialization for tool versus animal pictures during a one-back categorisation task. Thus, sensorimotor representations for these categories were in place at all ages. However, co-activation of these same brain regions by the visual objects’ written names was only present in adults, even though all children could read and comprehend all presented words, showed adult-like task performance, and older children were proficient readers. It thus takes years of training and expert reading skill before spontaneous processing of printed words’ sensorimotor meanings develops in childhood. PMID:25463817

  10. Positive schizotypy scores correlate with left visual field interference for negatively valenced emotional words: A lateralized emotional Stroop study.

    Science.gov (United States)

    Van Strien, Jan W; Van Kampen, Dirk

    2009-10-30

    Fourteen men scoring high and 14 men scoring low on a positive schizotypy scale participated in a lateralized emotional Stroop task. Vocal reaction times for color naming of neutral, positive and negative emotional words were recorded. Across participants, the color naming of neutral and emotional words was slightly faster to right than to left visual field presentations. In men with high scores on positive schizotypy, the presentation of negative words to the left visual field (right hemisphere) resulted in significant affective interference with color naming, which was significantly larger than in men with low scores. Correlational analysis also showed that positive schizotypy was significantly associated with emotional interference in response to LVF negative words. The outcome is discussed in terms of right hemispheric engagement in negative emotions in high positive schizotypic men.

  11. Visual half-field presentations of incongruent color words: effects of gender and handedness.

    Science.gov (United States)

    Franzon, M; Hugdahl, K

    1986-09-01

    Right-handed (dextral) and left-handed (sinistral) males and females (N = 15) were compared for language lateralization in a visual half-field (VHF) incongruent color-words paradigm. The paradigm consists of repeated brief (less than 200 msec) presentations of color-words written in an incongruent color. Presentations are either to the right or to the left of center fixation. The task of the subject is to report the color the word is written in on each trial, ignoring the color-word. Color-bars and congruent color-words were used as control stimuli. Vocal reaction time (VRT) and error frequency were used as dependent measures. The logic behind the paradigm is that incongruent color-words should lead to a greater cognitive conflict when presented in the half-field contralateral to the dominant hemisphere. The results showed significantly longer VRTs in the right half-field for the dextral subjects. Furthermore, significantly more errors were observed in the male dextral group when the incongruent stimuli were presented in the right half-field. There was a similar trend in the data for the sinistral males. No differences between half-fields were observed for the female groups. It is concluded that the present results strengthen previous findings from our laboratory (Hugdahl and Franzon, 1985) that the incongruent color-words paradigm is a useful non-invasive technique for the study of lateralization in the intact brain.

  12. ‘Distracters’ do not always distract: Visual working memory for angry faces is enhanced by incidental emotional words.

    Directory of Open Access Journals (Sweden)

    Margaret Cecilia Jackson

    2012-10-01

    Full Text Available We are often required to filter out distraction in order to focus on a primary task during which working memory (WM) is engaged. Previous research has shown that negative versus neutral distracters presented during a visual WM maintenance period significantly impair memory for neutral information. However, the contents of WM are often also emotional in nature. The question we address here is how incidental information might impact upon visual WM when both this and the memory items contain emotional information. We presented emotional versus neutral words during the maintenance interval of an emotional visual WM faces task. Participants encoded two angry or happy faces into WM, and several seconds into a 9-second maintenance period a negative, positive, or neutral word was flashed on the screen three times. A single neutral test face was presented for retrieval with a face identity that was either present or absent in the preceding study array. WM for angry face identities was significantly better when an emotional (negative or positive) versus neutral (or no) word was presented. In contrast, WM for happy face identities was not significantly affected by word valence. These findings suggest that the presence of emotion within an intervening stimulus boosts the emotional value of threat-related information maintained in visual WM and thus improves performance. In addition, we show that incidental events that are emotional in nature do not always distract from an ongoing WM task.

  13. Automatic Prompt System in the Process of Mapping plWordNet on Princeton WordNet

    Directory of Open Access Journals (Sweden)

    Paweł Kędzia

    2015-06-01

    Full Text Available The paper offers a critical evaluation of the power and usefulness of an automatic prompt system based on the extended Relaxation Labelling algorithm in the process of (manual) mapping plWordNet on Princeton WordNet. To this end the results of manual mapping – that is, inter-lingual relations between plWN and PWN synsets – are juxtaposed with the automatic prompts that were generated for the source-language synsets to be mapped. We check the number and type of inter-lingual relations introduced on the basis of automatic prompts and the distance of the respective prompt synsets from the actual target-language synsets.
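    The juxtaposition described above — comparing manually introduced inter-lingual relations with the automatically generated prompts — amounts to asking, for each source synset, whether and at what rank the manually chosen target appears among the prompts. A minimal sketch of that check (the synset identifiers and data below are hypothetical, not taken from plWordNet):

    ```python
    # Sketch of the evaluation idea: rank of the manual target synset
    # among the automatic prompts, or None when the prompts missed it.
    # All identifiers are made up for illustration.

    def prompt_hit_rank(manual_target, prompts):
        """1-based rank of the manual target among prompts, or None if absent."""
        try:
            return prompts.index(manual_target) + 1
        except ValueError:
            return None

    mapping = {
        "plwn:dom-1": ("pwn:house-1", ["pwn:house-1", "pwn:home-2"]),
        "plwn:kot-1": ("pwn:cat-1", ["pwn:feline-1"]),
    }
    ranks = {src: prompt_hit_rank(target, prompts)
             for src, (target, prompts) in mapping.items()}
    print(ranks)  # {'plwn:dom-1': 1, 'plwn:kot-1': None}
    ```

    Aggregating these ranks over all mapped synsets gives the hit rate and average prompt distance the paper evaluates.
    
    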

  14. Shared perceptual processes in phoneme and word perception: Evidence from aphasia

    Directory of Open Access Journals (Sweden)

    Heather Raye Dial

    2014-04-01

    Replicating previous studies, performance on the two word recognition tasks without closely matched distractors (WAB and PWM) was at ceiling for some subjects with impairments on consonant discrimination (see Figures 1a/1b). However, as shown in Figures 1c/1d, for word processing tasks matched in phonological discriminability to the consonant discrimination task, scores on consonant discrimination and word processing were highly correlated, and no individual demonstrated substantially better performance on word than phoneme perception. One patient demonstrated worse performance on lexical decision (d’ = .21) than phoneme perception (d’ = 1.72), which can be attributed to impaired lexical or semantic processing. These data argue against the hypothesis that phoneme and word perception rely on different perceptual processes/routes for processing, and instead indicate that word perception depends on perception of sublexical units.
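    The d’ scores cited above are signal-detection sensitivity measures, computed from hit and false-alarm rates as the difference of their normal z-transforms. A small sketch of that computation (the rates below are illustrative, not the patient's data):

    ```python
    # Sketch: d-prime (sensitivity) from hit and false-alarm rates,
    # as used for the lexical-decision scores above. Example rates are
    # hypothetical.
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate):
        """d' = z(hit rate) - z(false-alarm rate)."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    print(round(d_prime(0.80, 0.20), 2))  # 1.68
    ```

    Rates of exactly 0 or 1 make the z-transform infinite, so in practice they are adjusted (e.g., by a half-count correction) before computing d’.
    
    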

  15. Semantic processing of unattended parafoveal words.

    Science.gov (United States)

    Di Pace, E; Longoni, A M; Zoccolotti, P

    1991-08-01

    The influence that a context word, presented either foveally or parafoveally, may exert on the processing of a subsequent target word was studied in a semantic decision task. Fourteen subjects participated in the experiment. They were presented with word-nonword pairs (prime). One member of the pair (which the subjects had to attend to) appeared centrally, the other parafoveally. The prime was followed by a target at two inter-stimulus intervals (ISI; 200 and 2000 msec). The word stimulus of the pair could be semantically related or unrelated to the target. The subjects' task was to classify the target as animal or not animal by pressing one of two buttons as quickly as possible. When the target word was semantically associated with the foveal (attended) word, the reaction times were faster for both ISIs; when it was associated with the parafoveal (unattended) word in the prime pair, there were facilitatory effects only in the short ISI condition. A second experiment was run in order to evaluate the possibility that the obtained results were due to identification of the parafoveal stimulus. The same prime-target pairs of experiment 1 (without the target stimuli) were used. The prime-target pairs were presented to fourteen subjects who were requested to name the foveal (attended) stimulus and subsequently, if possible, the parafoveal (unattended) one. Even in this condition, the percentage of identification of the unattended word was only 15%, suggesting that previous findings were not due to identification of unattended stimuli. Results are discussed in relation to Posner and Snyder's (1975) dual coding theory.
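    The facilitation reported above is typically quantified as the mean reaction-time difference between unrelated and related prime-target pairs, computed separately for each ISI. A minimal sketch with hypothetical RTs:

    ```python
    # Sketch of the priming analysis implied above: facilitation is the
    # mean RT advantage of related over unrelated trials, per condition.
    # RT values (msec) are hypothetical.
    from statistics import mean

    def facilitation(related_rts, unrelated_rts):
        """Positive values mean the related prime sped up responses."""
        return mean(unrelated_rts) - mean(related_rts)

    foveal_isi200 = facilitation([520, 540, 530], [580, 600, 590])
    print(foveal_isi200)  # 60
    ```

    Repeating this per prime location (foveal vs. parafoveal) and ISI reproduces the pattern described: facilitation at both ISIs for attended primes, but only at the short ISI for unattended ones.
    
    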

  16. Visual thinking in action: visualizations as used on whiteboards.

    Science.gov (United States)

    Walny, Jagoda; Carpendale, Sheelagh; Riche, Nathalie Henry; Venolia, Gina; Fawcett, Philip

    2011-12-01

    While it is still most common for information visualization researchers to develop new visualizations from a data- or task-driven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design. © 2011 IEEE

  17. The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood

    Science.gov (United States)

    Havy, Mélanie; Foroud, Afra; Fais, Laurel; Werker, Janet F.

    2017-01-01

    Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in…

  18. When a hit sounds like a kiss : An electrophysiological exploration of semantic processing in visual narrative

    NARCIS (Netherlands)

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-01-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual

  19. Cascaded processing in written compound word production

    Directory of Open Access Journals (Sweden)

    Raymond eBertram

    2015-04-01

    Full Text Available In this study we investigated the intricate interplay between central linguistic processing and peripheral motor processes during typewriting. Participants had to typewrite two-constituent (noun-noun) Finnish compounds in response to picture presentation while their typing behavior was registered. As dependent measures we used writing onset time to assess what processes were completed before writing and inter-key intervals to assess what processes were going on during writing. It was found that writing onset time was determined by whole word frequency rather than constituent frequencies, indicating that compound words are retrieved as whole orthographic units before writing is initiated. In addition, we found that the length of the first syllable also affects writing onset time, indicating that the first syllable is fully prepared before writing commences. The inter-key interval results showed that linguistic planning is not fully ready before writing, but cascades into the motor execution phase. More specifically, inter-key intervals were largest at syllable and morpheme boundaries, supporting the view that additional linguistic planning takes place at these boundaries. Bigram and trigram frequency also affected inter-key intervals with shorter intervals corresponding to higher frequencies. This can be explained by stronger memory traces for frequently co-occurring letter sequences in the motor memory for typewriting. These frequency effects were even larger in the second than in the first constituent, indicating that low-level motor memory starts to become more important during the course of writing compound words. We discuss our results in the light of current models of morphological processing and written word production.

  20. Cascaded processing in written compound word production.

    Science.gov (United States)

    Bertram, Raymond; Tønnessen, Finn Egil; Strömqvist, Sven; Hyönä, Jukka; Niemi, Pekka

    2015-01-01

    In this study we investigated the intricate interplay between central linguistic processing and peripheral motor processes during typewriting. Participants had to typewrite two-constituent (noun-noun) Finnish compounds in response to picture presentation while their typing behavior was registered. As dependent measures we used writing onset time to assess what processes were completed before writing and inter-key intervals to assess what processes were going on during writing. It was found that writing onset time was determined by whole word frequency rather than constituent frequencies, indicating that compound words are retrieved as whole orthographic units before writing is initiated. In addition, we found that the length of the first syllable also affects writing onset time, indicating that the first syllable is fully prepared before writing commences. The inter-key interval results showed that linguistic planning is not fully ready before writing, but cascades into the motor execution phase. More specifically, inter-key intervals were largest at syllable and morpheme boundaries, supporting the view that additional linguistic planning takes place at these boundaries. Bigram and trigram frequency also affected inter-key intervals with shorter intervals corresponding to higher frequencies. This can be explained by stronger memory traces for frequently co-occurring letter sequences in the motor memory for typewriting. These frequency effects were even larger in the second than in the first constituent, indicating that low-level motor memory starts to become more important during the course of writing compound words. We discuss our results in the light of current models of morphological processing and written word production.

  1. Effects of Grammatical Categories on Children's Visual Language Processing: Evidence from Event-Related Brain Potentials

    Science.gov (United States)

    Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III

    2006-01-01

    This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…

  2. Measuring, Predicting and Visualizing Short-Term Change in Word Representation and Usage in VKontakte Social Network

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.; Volkova, Svitlana

    2017-05-17

    Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short-term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte, collected during the Russia–Ukraine crisis in 2014–2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.
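    The abstract does not spell out how representation shift is computed; a common formulation, sketched here purely as an illustration, measures shift as the cosine distance between a word's embedding vectors from two time slices (the toy vectors and the aligned-embedding setup below are assumptions, not taken from the paper):

```python
import numpy as np

def cosine_shift(v_prev, v_curr):
    """Representation shift of a word between two time slices:
    cosine distance between its embedding vectors."""
    v_prev = np.asarray(v_prev, dtype=float)
    v_curr = np.asarray(v_curr, dtype=float)
    cos = v_prev @ v_curr / (np.linalg.norm(v_prev) * np.linalg.norm(v_curr))
    return 1.0 - cos

# Toy weekly embeddings for one keyword. In practice these would come
# from embedding models trained on successive corpus slices and aligned
# to a common space (e.g. via orthogonal Procrustes).
week1 = [1.0, 0.0, 0.0]
week2 = [1.0, 0.0, 0.0]   # same contexts -> shift near 0
week3 = [0.0, 1.0, 0.0]   # orthogonal contexts -> shift of 1

print(cosine_shift(week1, week2))  # 0.0
print(cosine_shift(week1, week3))  # 1.0
```

    A time series of such per-word distances is what a forecasting model could then be trained on.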

  3. Grasping hand verbs: oscillatory beta and alpha correlates of action-word processing.

    Directory of Open Access Journals (Sweden)

    Valentina Niccolai

    Full Text Available The grounded cognition framework proposes that sensorimotor brain areas, which are typically involved in perception and action, also play a role in linguistic processing. We assessed oscillatory modulation during visual presentation of single verbs and localized cortical motor regions by means of isometric contraction of hand and foot muscles. Analogously to oscillatory activation patterns accompanying voluntary movements, we expected a somatotopically distributed suppression of beta and alpha frequencies in the motor cortex during processing of body-related action verbs. Magnetoencephalographic data were collected during presentation of verbs that express actions performed using the hands (H) or feet (F). Verbs denoting no bodily movement (N) were used as a control. Between 150 and 500 msec after visual word onset, beta rhythms were suppressed in H and F in comparison with N in the left hemisphere. Similarly, alpha oscillations showed left-lateralized power suppression in the H-N contrast, although at a later stage. The cortical oscillatory activity that typically occurs during voluntary movements is therefore found to somatotopically accompany the processing of body-related verbs. The combination of a localizer task with the oscillatory investigation applied to verb reading as in the present study provides further methodological possibilities of tracking language processing in the brain.

  4. The Visual Word Form Area remains in the dominant hemisphere for language in late-onset left occipital lobe epilepsies: A postsurgery analysis of two cases.

    Science.gov (United States)

    Lopes, Ricardo; Nunes, Rita Gouveia; Simões, Mário Rodrigues; Secca, Mário Forjaz; Leal, Alberto

    2015-05-01

    Automatic recognition of words from letter strings is a critical processing step in reading that is lateralized to the left-hemisphere middle fusiform gyrus in the so-called Visual Word Form Area (VWFA). Surgical lesions in this location can lead to irreversible alexia. Very early left hemispheric lesions can lead to transfer of the VWFA to the nondominant hemisphere, but it is currently unknown if this capability is preserved in epilepsies developing after reading acquisition. In this study, we aimed to determine the lateralization of the VWFA in late-onset left inferior occipital lobe epilepsies and also the effect of surgical disconnection from the adjacent secondary visual areas. Two patients with focal epilepsies with onset near the VWFA underwent surgery for epilepsy, with sparing of this area. Neuropsychology evaluations were performed before and after surgery, as well as quantitative evaluation of the speed of word reading. Comparison of the surgical localization of the lesion with the BOLD activation associated with the words-versus-strings contrast was performed, as well as a study of the associated main white fiber pathways using diffusion-weighted imaging. Neither of the patients developed alexia after surgery (similar word reading speed before and after surgery) despite the fact that the inferior occipital surgical lesions reached the neighborhood (less than 1 cm) of the VWFA. Surgeries partly disconnected the VWFA from left secondary visual areas, suggesting that pathways connecting to the posterior visual ventral stream were severely affected but did not induce alexia. The anterior and superior limits of the resection suggest that the critical connection between the VWFA and the Wernicke's Angular Gyrus cortex was not affected, which is supported by the detection of this tract with probabilistic tractography.
Left occipital lobe epilepsies developing after reading acquisition did not produce atypical localizations of the VWFA, even with foci in the

  5. The Developmental Lexicon Project: A behavioral database to investigate visual word recognition across the lifespan.

    Science.gov (United States)

    Schröter, Pauline; Schroeder, Sascha

    2017-12-01

    With the Developmental Lexicon Project (DeveL), we present a large-scale study that was conducted to collect data on visual word recognition in German across the lifespan. A total of 800 children from Grades 1 to 6, as well as two groups of younger and older adults, participated in the study and completed a lexical decision and a naming task. We provide a database for 1,152 German words, comprising behavioral data from seven different stages of reading development, along with sublexical and lexical characteristics for all stimuli. The present article describes our motivation for this project, explains the methods we used to collect the data, and reports analyses on the reliability of our results. In addition, we explored developmental changes in three marker effects in psycholinguistic research: word length, word frequency, and orthographic similarity. The database is available online.
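    As an illustration of how a marker effect such as the word-frequency effect can be quantified from a database like this, here is a minimal least-squares fit of response time against log frequency (the numbers are fabricated for illustration and are not DeveL data):

```python
import numpy as np

# Fabricated (not DeveL) data: log word frequency vs. mean lexical
# decision RT in ms. Higher-frequency words are recognized faster.
log_freq = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
rt_ms = np.array([760.0, 730.0, 680.0, 640.0, 600.0])

# Ordinary least squares: rt = intercept + slope * log_freq
A = np.column_stack([np.ones_like(log_freq), log_freq])
(intercept, slope), *_ = np.linalg.lstsq(A, rt_ms, rcond=None)

print(f"slope = {slope:.1f} ms per log-frequency unit")  # negative slope
```

    Developmental change in a marker effect would then show up as a change in the fitted slope across grade levels.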

  6. National Study of Word Processing Installations in Selected Business Organizations. A Report on the National Word Processing Research Study of Delta Pi Epsilon.

    Science.gov (United States)

    Scriven, Jolene D.; And Others

    A study was conducted (1) to determine current practices in word processing installations in selected organizations throughout the United States, and (2) to ascertain anticipated future developments in word processing as well as to provide recommendations for educational institutions that prepare workers for business offices. Seven interview…

  7. Relative speed of processing determines color-word contingency learning.

    Science.gov (United States)

    Forrin, Noah D; MacLeod, Colin M

    2017-10-01

    In three experiments, we tested a relative-speed-of-processing account of color-word contingency learning, a phenomenon in which color identification responses to high-contingency stimuli (words that appear most often in particular colors) are faster than those to low-contingency stimuli. Experiment 1 showed equally large contingency-learning effects whether responding was to the colors or to the words, likely due to slow responding to both dimensions because of the unfamiliar mapping required by the key press responses. For Experiment 2, participants switched to vocal responding, in which reading words is considerably faster than naming colors, and we obtained a contingency-learning effect only for color naming, the slower dimension. In Experiment 3, previewing the color information resulted in a reduced contingency-learning effect for color naming, but it enhanced the contingency-learning effect for word reading. These results are all consistent with contingency learning influencing performance only when the nominally irrelevant feature is faster to process than the relevant feature, and therefore are entirely in accord with a relative-speed-of-processing explanation.

  8. Information properties of morphologically complex words modulate brain activity during word reading.

    Science.gov (United States)

    Hakala, Tero; Hultén, Annika; Lehtonen, Minna; Lagus, Krista; Salmelin, Riitta

    2018-06-01

    Neuroimaging studies of the reading process point to functionally distinct stages in word recognition. Yet, current understanding of the operations linked to those various stages is mainly descriptive in nature. Approaches developed in the field of computational linguistics may offer a more quantitative approach for understanding brain dynamics. Our aim was to evaluate whether a statistical model of morphology, with well-defined computational principles, can capture the neural dynamics of reading, using the concept of surprisal from information theory as the common measure. The Morfessor model, created for unsupervised discovery of morphemes, is based on the minimum description length principle and attempts to find optimal units of representation for complex words. In a word recognition task, we correlated brain responses to word surprisal values derived from Morfessor and from other psycholinguistic variables that have been linked with various levels of linguistic abstraction. The magnetoencephalography data analysis focused on spatially, temporally and functionally distinct components of cortical activation observed in reading tasks. The early occipital and occipito-temporal responses were correlated with parameters relating to visual complexity and orthographic properties, whereas the later bilateral superior temporal activation was correlated with whole-word based and morphological models. The results show that the word processing costs estimated by the statistical Morfessor model are relevant for brain dynamics of reading during late processing stages. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
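    Surprisal, the common measure referred to above, is the negative log probability of a unit under some statistical model. A minimal sketch, using relative frequencies from a toy corpus as a stand-in for Morfessor's MDL-based morph probabilities:

```python
import math
from collections import Counter

# Surprisal of a unit is -log2 of its probability under a statistical
# model. Morfessor derives morph probabilities from the minimum
# description length principle; relative frequencies from a toy corpus
# serve as a stand-in here.
corpus = ["the", "cat", "sat", "on", "the", "mat", "the", "cat"]
counts = Counter(corpus)
total = sum(counts.values())

def surprisal(word):
    return -math.log2(counts[word] / total)

print(surprisal("the"))  # frequent word -> low surprisal (~1.42 bits)
print(surprisal("mat"))  # rare word -> high surprisal (3.0 bits)
```

    Per-word surprisal values computed this way are what get correlated with the brain responses.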

  9. The right posterior inferior frontal gyrus contributes to phonological word decisions in the healthy brain

    DEFF Research Database (Denmark)

    Hartwigsen, Gesa; Price, Cathy J; Baumgaertner, Annette

    2010-01-01

    There is consensus that the left hemisphere plays a dominant role in language processing, but functional imaging studies have shown that the right as well as the left posterior inferior frontal gyri (pIFG) are activated when healthy right-handed individuals make phonological word decisions. Here we … used online transcranial magnetic stimulation (TMS) to examine the functional relevance of the right pIFG for auditory and visual phonological decisions. Healthy right-handed individuals made phonological or semantic word judgements on the same set of auditorily and visually presented words while … IFG impaired reaction times and accuracy of phonological but not semantic decisions for visually and auditorily presented words. TMS over left, right or bilateral pIFG disrupted phonological processing to a similar degree. In a follow-up experiment, the intensity threshold for delaying phonological judgements …

  10. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  11. The Influence of Visual Word Form in Reading: Single Case Study of an Arabic Patient with Deep Dyslexia

    Science.gov (United States)

    Boumaraf, Assia; Macoir, Joël

    2016-01-01

    Deep dyslexia is a written language disorder characterized by poor reading of non-words, and advantage for concrete over abstract words with production of semantic, visual and morphological errors. In this single case study of an Arabic patient with input deep dyslexia, we investigated the impact of graphic features of Arabic on manifestations of…

  12. The Time Course of Incremental Word Processing during Chinese Reading

    Science.gov (United States)

    Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus

    2018-01-01

    In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…

  13. Cross-modal priming facilitates production of low imageability word strings in a case of deep-phonological dysphasia

    Directory of Open Access Journals (Sweden)

    Laura Mary Mccarthy

    2014-04-01

    Full Text Available Introduction. Characteristics of repetition in deep-phonological dysphasia include an inability to repeat nonwords, semantic errors in single word repetition (deep dysphasia) and in multiple word repetition (phonological dysphasia), and better repetition of highly imageable words (Wilshire & Fisher, 2004; Ablinger et al., 2008). Additionally, visual processing of words is often more accurate than auditory processing of words (Howard & Franklin, 1988). We report a case study of LT, who incurred an LCVA on 10/3/2009. She initially presented with deep dysphasia and near-normal word reading. When enrolled in this study, approximately 24 months post-onset, she presented with phonological dysphasia. We investigated the hypotheses that (1) reproduction of a word string would be more accurate when preceded by a visual presentation of the word string compared to two auditory presentations of the word string, and (2) that this facilitative boost would be observed only for strings of low-imageability words, consistent with the imageability effect in repetition. Method. Three-word strings were created in four conditions which varied the frequency (F) and imageability (I) of words within a string: HiF-HiI, LoF-HiI, HiF-LoI, LoF-LoI. All strings were balanced for total syllable length and were unrelated semantically and phonologically. The dependent variable was accuracy of repetition of each word within a string. We created six modality prime conditions, each with 24 strings drawn equally from the four frequency-imageability types, randomized within modality condition: Auditory Once (AudOnce) – string presented auditorily one time; Auditory Twice (AudAud) – string presented auditorily two consecutive times; Visual Once (VisOnce) – string presented visually one time; Visual Twice (VisVis) – string presented visually two consecutive times; Auditory then Visual (AudVis) – string presented once auditorily, then a second time visually; Visual then Auditory (VisAud)

  14. The effects of sad prosody on hemispheric specialization for words processing.

    Science.gov (United States)

    Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat

    2015-06-01

    This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Differential emotional processing in concrete and abstract words.

    Science.gov (United States)

    Yao, Bo; Keitel, Anne; Bruce, Gillian; Scott, Graham G; O'Donnell, Patrick J; Sereno, Sara C

    2018-02-12

    Emotion (positive and negative) words are typically recognized faster than neutral words. Recent research suggests that emotional valence, while often treated as a unitary semantic property, may be differentially represented in concrete and abstract words. Studies that have explicitly examined the interaction of emotion and concreteness, however, have demonstrated inconsistent patterns of results. Moreover, these findings may be limited as certain key lexical variables (e.g., familiarity, age of acquisition) were not taken into account. We investigated the emotion-concreteness interaction in a large-scale, highly controlled lexical decision experiment. A 3 (Emotion: negative, neutral, positive) × 2 (Concreteness: abstract, concrete) design was used, with 45 items per condition and 127 participants. We found a significant interaction between emotion and concreteness. Although positive and negative valenced words were recognized faster than neutral words, this emotion advantage was significantly larger in concrete than in abstract words. We explored potential contributions of participant alexithymia level and item imageability to this interactive pattern. We found that only word imageability significantly modulated the emotion-concreteness interaction. While both concrete and abstract emotion words are advantageously processed relative to comparable neutral words, the mechanisms of this facilitation are paradoxically more dependent on imageability in abstract words. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Evidence for simultaneous syntactic processing of multiple words during reading.

    Directory of Open Access Journals (Sweden)

    Joshua Snell

    Full Text Available A hotly debated issue in reading research concerns the extent to which readers process parafoveal words, and how parafoveal information might influence foveal word recognition. We investigated syntactic word processing both in sentence reading and in reading isolated foveal words when these were flanked by parafoveal words. In Experiment 1 we found a syntactic parafoveal preview benefit in sentence reading, meaning that fixation durations on target words were decreased when there was a syntactically congruent preview word at the target location (n) during the fixation on the pre-target (n-1). In Experiment 2 we used a flanker paradigm in which participants had to classify foveal target words as either noun or verb, when those targets were flanked by syntactically congruent or incongruent words (stimulus on-time 170 ms). Lower response times and error rates in the congruent condition suggested that higher-order (syntactic) information can be integrated across foveal and parafoveal words. Although higher-order parafoveal-on-foveal effects have been elusive in sentence reading, results from our flanker paradigm show that the reading system can extract higher-order information from multiple words in a single glance. We propose a model of reading to account for the present findings.

  17. Effects of context and individual differences on the processing of taboo words.

    Science.gov (United States)

    Christianson, Kiel; Zhou, Peiyun; Palmer, Cassie; Raizen, Adina

    2017-07-01

    Previous studies suggest that taboo words are special in regards to language processing. Findings from the studies have led to the formation of two theories, global resource theory and binding theory, of taboo word processing. The current study investigates how readers process taboo words embedded in sentences during silent reading. In two experiments, measures collected include eye movement data, accuracy and reaction time measures for recalling probe words within the sentences, and individual differences in likelihood of being offended by taboo words. Although certain aspects of the results support both theories, as the likelihood of a person being offended by a taboo word influenced some measures, neither theory sufficiently predicts or describes the effects observed. The results are interpreted as evidence that processing effects ascribed to taboo words are largely, but not completely, attributable to the context in which they are used and the individual attitudes of the people who hear/read them. The results also demonstrate the importance of investigating taboo words in naturalistic language processing paradigms. A revised theory of taboo word processing is proposed that incorporates both global resource theory and binding theory along with the sociolinguistic factors and individual differences that largely drive the effects observed here. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Processing Electromyographic Signals to Recognize Words

    Science.gov (United States)

    Jorgensen, C. C.; Lee, D. D.

    2009-01-01

    A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
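    The pipeline described above (preprocessing, feature extraction, pattern classification) can be sketched in miniature. This is an illustration only: windowed RMS features and a nearest-centroid classifier stand in for the article's actual preprocessing and neural-network stages, and all signals and word labels are synthetic.

```python
import numpy as np

def rms_features(signal, n_windows=4):
    """Toy feature extraction: RMS energy in fixed windows (a stand-in
    for the article's preprocessing/feature-extraction stage)."""
    chunks = np.array_split(np.asarray(signal, dtype=float), n_windows)
    return np.array([np.sqrt(np.mean(c ** 2)) for c in chunks])

def train_centroids(examples):
    """examples: {word: [feature_vector, ...]} -> {word: mean vector}."""
    return {word: np.mean(vecs, axis=0) for word, vecs in examples.items()}

def classify(centroids, features):
    """Nearest-centroid decision (a stand-in for the neural network)."""
    return min(centroids, key=lambda w: np.linalg.norm(centroids[w] - features))

# Synthetic "EMG" signals: each word gets a characteristic energy profile.
rng = np.random.default_rng(0)

def fake_emg(profile):
    return np.repeat(profile, 50) * rng.normal(1.0, 0.05, 200)

profiles = {"stop": [1.0, 0.2, 0.2, 1.0], "go": [0.2, 1.0, 1.0, 0.2]}
training = {w: [rms_features(fake_emg(p)) for _ in range(5)]
            for w, p in profiles.items()}
centroids = train_centroids(training)

print(classify(centroids, rms_features(fake_emg(profiles["stop"]))))  # stop
```

    A real system would replace the RMS windows with richer spectral features and the centroid rule with the trained network, but the train/extract/classify structure is the same.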

  19. Activation of words with phonological overlap

    Directory of Open Access Journals (Sweden)

    Claudia K. Friedrich

    2013-08-01

    Full Text Available Multiple lexical representations overlapping with the input (cohort neighbors) are temporarily activated in the listener’s mental lexicon when speech unfolds in time. Activation for cohort neighbors appears to rapidly decline as soon as there is mismatch with the input. However, it is a matter of debate whether or not they are completely excluded from further processing. We recorded behavioral data and event-related brain potentials (ERPs) in auditory-visual word onset priming during a lexical decision task. As primes we used the first two syllables of spoken German words. In a carrier word condition, the primes were extracted from spoken versions of the target words (ano-ANORAK 'anorak'). In a cohort neighbor condition, the primes were taken from words that overlap with the target word up to the second nucleus (ana- taken from ANANAS 'pineapple'). Relative to a control condition, where primes and targets were unrelated, lexical decision responses for cohort neighbors were delayed. This reveals that cohort neighbors are disfavored by the decision processes at the behavioral front end. In contrast, left-anterior ERPs reflected long-lasting facilitated processing of cohort neighbors. We interpret these results as evidence for extended parallel processing of cohort neighbors. That is, in parallel to the preparation and elicitation of delayed lexical decision responses to cohort neighbors, aspects of the processing system appear to keep track of those less efficient candidates.

  20. Iconic Factors and Language Word Order

    Science.gov (United States)

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  1. Visual Motion Perception and Visual Attentive Processes.

    Science.gov (United States)

    1988-04-01

    88-0551 Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364. … Sperling. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. (HIPS is the Human Information Processing Laboratory's Image Processing System.) 1985: van Santen, Jan P. H., and George Sperling. Elaborated Reichardt detectors. Journal of the Optical …

  2. The word processing deficit in semantic dementia: all categories are equal, but some categories are more equal than others.

    Science.gov (United States)

    Pulvermüller, Friedemann; Cooper-Pye, Elisa; Dine, Clare; Hauk, Olaf; Nestor, Peter J; Patterson, Karalyn

    2010-09-01

    It has been claimed that semantic dementia (SD), the temporal variant of fronto-temporal dementia, is characterized by an across-the-board deficit affecting all types of conceptual knowledge. We here confirm this generalized deficit but also report differential degrees of impairment in processing specific semantic word categories in a case series of SD patients (N = 11). Within the domain of words with strong visually grounded meaning, the patients' lexical decision accuracy was more impaired for color-related than for form-related words. Likewise, within the domain of action verbs, the patients' performance was worse for words referring to face movements and speech acts than for words semantically linked to actions performed with the hand and arm. Psycholinguistic properties were matched between the stimulus groups entering these contrasts; an explanation for the differential degrees of impairment must therefore involve semantic features of the words in the different conditions. Furthermore, this specific pattern of deficits cannot be captured by classic category distinctions such as nouns versus verbs or living versus nonliving things. Evidence from previous neuroimaging research indicates that color- and face/speech-related words, respectively, draw most heavily on anterior-temporal and inferior-frontal areas, the structures most affected in SD. Our account combines (a) the notion of an anterior-temporal amodal semantic "hub" to explain the profound across-the-board deficit in SD word processing, with (b) a semantic topography model of category-specific circuits whose cortical distributions reflect semantic features of the words and concepts represented.

  3. Exposure to Androstenes Influences Processing of Emotional Words

    Directory of Open Access Journals (Sweden)

    Patrizia d'Ettorre

    2018-01-01

    Full Text Available There is evidence that human-produced androstenes affect attitudinal, emotional, and physiological states in a context-dependent manner, suggesting that they could be involved in modulating social interactions. For instance, androstadienone appears to increase attention specifically to emotional information. Most of the previous work focused on one or two androstenes. Here, we tested whether androstenes affect linguistic processing, using three different androstene compounds. Participants (90 women and 77 men) performed a lexical decision task after being exposed to an androstene or to a control treatment (all compounds were applied on the philtrum). We tested effects on three categories of target words, varying in emotional valence: positive, competitive, and neutral words (e.g., hope, war, and century, respectively). Results show that response times were modulated by androstene treatment and by emotional valence of words. Androstenone, but not androstadienone and androstenol, significantly slowed down the reaction time to words with competitive valence. Moreover, men exposed to androstenol showed a significantly reduced error rate, although men tended to make more errors than women in general. This suggests that these androstenes modulate the processing of emotional words, namely that some particular lexical emotional content may become more salient under the effect of androstenes.

  4. Looking and touching: what extant approaches reveal about the structure of early word knowledge.

    Science.gov (United States)

    Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret

    2015-09-01

    The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants' responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. © 2014 The Authors Developmental Science Published by John Wiley & Sons Ltd.

  5. Negative Transfer Effects on L2 Word Order Processing.

    Science.gov (United States)

    Erdocia, Kepa; Laka, Itziar

    2018-01-01

    Does first language (L1) word order affect the processing of non-canonical but grammatical syntactic structures in second language (L2) comprehension? In the present study, we test whether L1-Spanish speakers of L2-Basque process subject-verb-object (SVO) and object-verb-subject (OVS) non-canonical word order sentences of Basque in the same way as Basque native speakers. Crucially, while OVS orders are non-canonical in both Spanish and Basque, SVO is non-canonical in Basque but is the canonical word order in Spanish. Our electrophysiological results showed that the characteristics of the L1 affect the processing of the L2 even in highly proficient, early-acquired bilingual populations. Specifically, in the non-native group, we observed a left anterior negativity-like component when comparing S and O at sentence-initial position and a P600 when comparing those elements at sentence-final position. These results are similar to those reported by Casado et al. (2005) for native speakers of Spanish, indicating that L2-Basque speakers rely on their L1 Spanish when processing SVO-OVS word order sentences. Our results favored the competition model (MacWhinney, 1997).

  6. Negative Transfer Effects on L2 Word Order Processing

    Directory of Open Access Journals (Sweden)

    Kepa Erdocia

    2018-03-01

    Does first language (L1) word order affect the processing of non-canonical but grammatical syntactic structures in second language (L2) comprehension? In the present study, we test whether L1-Spanish speakers of L2-Basque process subject–verb–object (SVO) and object–verb–subject (OVS) non-canonical word order sentences of Basque in the same way as Basque native speakers. Crucially, while OVS orders are non-canonical in both Spanish and Basque, SVO is non-canonical in Basque but is the canonical word order in Spanish. Our electrophysiological results showed that the characteristics of the L1 affect the processing of the L2 even in highly proficient, early-acquired bilingual populations. Specifically, in the non-native group, we observed a left anterior negativity-like component when comparing S and O at sentence-initial position and a P600 when comparing those elements at sentence-final position. These results are similar to those reported by Casado et al. (2005) for native speakers of Spanish, indicating that L2-Basque speakers rely on their L1 Spanish when processing SVO–OVS word order sentences. Our results favored the competition model (MacWhinney, 1997).

  7. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    Science.gov (United States)

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.
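The process dissociation procedure mentioned above rests on two simple equations: inclusion performance = C + A(1 − C) and exclusion performance = A(1 − C), where C is the conscious and A the automatic contribution. As a rough illustration of that arithmetic (the numbers below are invented, and this is the generic Jacoby (1991) computation, not the independent retrieval or generate-source model fits reported in the study):

```python
def process_dissociation(inclusion, exclusion):
    """Estimate conscious (C) and automatic (A) memory contributions
    from inclusion/exclusion completion rates (Jacoby, 1991):
        inclusion = C + A * (1 - C)
        exclusion = A * (1 - C)
    """
    c = inclusion - exclusion                           # conscious contribution
    a = exclusion / (1 - c) if c < 1 else float("nan")  # automatic contribution
    return c, a

# Hypothetical completion rates: 60% under inclusion, 20% under exclusion.
c, a = process_dissociation(inclusion=0.60, exclusion=0.20)
print(f"C = {c:.2f}, A = {a:.2f}")  # C = 0.40, A = 0.33
```

Subtracting the two equations isolates C directly, after which A follows by division; the independence assumption (A operates whether or not C succeeds) is what licenses this algebra.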

  8. A Dual-Route Model that Learns to Pronounce English Words

    Science.gov (United States)

    Remington, Roger W.; Miller, Craig S.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    This paper describes a model that learns to pronounce English words. Learning occurs in two modules: 1) a rule-based module that constructs pronunciations by phonetic analysis of the letter string, and 2) a whole-word module that learns to associate subsets of letters with pronunciations, without phonetic analysis. In a simulation on a corpus of over 300 words, the model produced pronunciation latencies consistent with the effects of word frequency and orthographic regularity observed in human data. Implications of the model for theories of visual word processing and reading instruction are discussed.
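The dual-route idea can be sketched in a few lines: a whole-word route stores learned pronunciations while a rule route assembles one from grapheme-phoneme correspondences. The lexicon entries, rules, and phoneme codes below are invented for the sketch; this is not the paper's trained model.

```python
# Toy dual-route reader: lexical lookup plus greedy grapheme-phoneme rules.
LEXICON = {"have": "hav", "cave": "kAv"}                # hypothetical learned words
GP_RULES = [("ave", "Av"), ("c", "k"), ("h", "h"), ("g", "g")]  # toy rules

def sublexical(word):
    """Assemble a pronunciation by greedy left-to-right rule matching."""
    out, i = "", 0
    while i < len(word):
        for graph, phon in GP_RULES:
            if word.startswith(graph, i):
                out += phon
                i += len(graph)
                break
        else:               # no rule applies: pass the letter through
            out += word[i]
            i += 1
    return out

def pronounce(word):
    # The lexical route wins for known words (capturing irregular forms like
    # "have"); novel words fall back on the assembled rule-based pronunciation.
    return LEXICON.get(word, sublexical(word))

print(pronounce("have"))  # hav  (irregular, lexical route)
print(pronounce("gave"))  # gAv  (novel word, rule route)
```

The interplay shown here, with the stored form overriding the regular rule output, is the mechanism by which dual-route models capture frequency and regularity effects.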

  9. THE INFLUENCE OF SYLLABIFICATION RULES IN L1 ON L2 WORD RECOGNITION.

    Science.gov (United States)

    Choi, Wonil; Nam, Kichun; Lee, Yoonhyoung

    2015-10-01

    Experiments with Korean learners of English and English monolinguals were conducted to examine whether knowledge of syllabification in the native language (Korean) affects the recognition of printed words in the non-native language (English). Another purpose of this study was to test whether syllables are the processing unit in Korean visual word recognition. In Experiment 1, 26 native Korean speakers and 19 native English speakers participated. In Experiment 2, 40 native Korean speakers participated. In two experiments, syllable length was manipulated based on the Korean syllabification rule and the participants performed a lexical decision task. Analyses of variance were performed for the lexical decision latencies and error rates in two experiments. The results from Korean learners of English showed that two-syllable words based on the Korean syllabification rule were recognized faster as words than various types of three-syllable words, suggesting that Korean learners of English exploited their L1 phonological knowledge in recognizing English words. The results of the current study also support the idea that syllables are a processing unit of Korean visual word recognition.

  10. Specifying theories of developmental dyslexia: a diffusion model analysis of word recognition

    NARCIS (Netherlands)

    Zeguers, M.H.T.; Snellings, P.; Tijms, J.; Weeda, W.D.; Tamboer, P.; Bexkens, A.; Huizenga, H.M.

    2011-01-01

    The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and…
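The diffusion model used in such analyses decomposes reaction times into drift rate (evidence quality), boundary separation (caution), and non-decision time. A minimal simulation of that process (generic Wiener diffusion with illustrative parameter values, not the authors' fitted model) might look like:

```python
import random

def diffusion_trial(drift, boundary, ndt, dt=0.001, noise=1.0, rng=random):
    """Simulate one diffusion-model trial: evidence accumulates from 0 toward
    +boundary ('word') or -boundary ('nonword'); ndt is non-decision time."""
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5          # noise scales with sqrt of the time step
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return t + ndt, x > 0           # (reaction time in s, correct response)

rng = random.Random(1)
trials = [diffusion_trial(drift=2.0, boundary=1.0, ndt=0.3, rng=rng)
          for _ in range(500)]
accuracy = sum(correct for _, correct in trials) / len(trials)
mean_rt = sum(rt for rt, _ in trials) / len(trials)
print(f"accuracy = {accuracy:.2f}, mean RT = {mean_rt:.3f} s")
```

Fitting such a model to children's RT distributions is what lets the analysis separate slow evidence accumulation (a phonological deficit) from cautious boundary settings (uncertainty).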

  11. Embedding Metadata and Other Semantics in Word Processing Documents

    Directory of Open Access Journals (Sweden)

    Peter Sefton

    2009-10-01

    This paper describes a technique for embedding document metadata, and potentially other semantic references, inline in word processing documents, which the authors have implemented with the help of a software development team. Several assumptions underlie the approach: it must be available across computing platforms and work with both Microsoft Word (because of its user base) and OpenOffice.org (because of its free availability). Further, the application needs to be acceptable to and usable by users, so the initial implementation covers only a small number of features, which will be extended only after user testing. Within these constraints the system provides a mechanism for encoding not only simple metadata, but for inferring hierarchical relationships between metadata elements from a 'flat' word processing file. The paper includes links to open source code implementing the techniques as part of a broader suite of tools for academic writing. This addresses tools and software, semantic web and data curation, and integrating curation into research workflows, and will provide a platform for integrating work on ontologies, vocabularies and folksonomies into word processing tools.
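The core idea, metadata carried inline in a flat sequence of styled paragraphs with hierarchy inferred from the names rather than from nesting, can be sketched as follows. The `meta:` style convention and dotted key paths here are hypothetical illustrations, not the authors' actual encoding scheme:

```python
# Hypothetical flat word-processing file: (paragraph style, paragraph text).
flat_paragraphs = [
    ("meta:dc.title", "Embedding Metadata in Word Processing Documents"),
    ("meta:dc.creator.name", "P. Sefton"),
    ("meta:dc.creator.affiliation", "USQ"),
    ("body", "This paper describes a technique..."),
]

def infer_metadata(paragraphs):
    """Collect 'meta:'-styled paragraphs and rebuild a nested structure
    from their dotted key names (hierarchy inferred from a flat file)."""
    meta = {}
    for style, text in paragraphs:
        if not style.startswith("meta:"):
            continue
        node = meta
        *path, leaf = style[len("meta:"):].split(".")
        for key in path:                 # walk/create the hierarchy
            node = node.setdefault(key, {})
        node[leaf] = text
    return meta

print(infer_metadata(flat_paragraphs))
```

Because the word processor only ever sees ordinary styled paragraphs, the same document round-trips through Word and OpenOffice.org while remaining machine-readable.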

  12. Evaluating a Bilingual Text-Mining System with a Taxonomy of Key Words and Hierarchical Visualization for Understanding Learner-Generated Text

    Science.gov (United States)

    Kong, Siu Cheung; Li, Ping; Song, Yanjie

    2018-01-01

    This study evaluated a bilingual text-mining system, which incorporated a bilingual taxonomy of key words and provided hierarchical visualization, for understanding learner-generated text in the learning management systems through automatic identification and counting of matching key words. A class of 27 in-service teachers studied a course…
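The matching-and-counting mechanism such a system relies on can be sketched roughly as follows; the taxonomy, category paths, and hierarchical roll-up scheme below are illustrative assumptions, not the evaluated system's design:

```python
# Hypothetical taxonomy: category path -> key words to match in learner text.
taxonomy = {
    "pedagogy/assessment": ["rubric", "quiz"],
    "pedagogy/collaboration": ["group work", "peer"],
}

def count_keywords(text, taxonomy):
    """Count taxonomy key words in text, rolling counts up the hierarchy
    so each parent category aggregates its children."""
    text = text.lower()
    counts = {}
    for category, words in taxonomy.items():
        n = sum(text.count(w) for w in words)
        parts = category.split("/")
        for i in range(1, len(parts) + 1):   # credit every level of the path
            node = "/".join(parts[:i])
            counts[node] = counts.get(node, 0) + n
    return counts

text = "We designed a rubric for group work and a short quiz."
print(count_keywords(text, taxonomy))
# {'pedagogy': 3, 'pedagogy/assessment': 2, 'pedagogy/collaboration': 1}
```

The roll-up is what makes a hierarchical visualization possible: parent nodes can be sized by aggregated counts while leaves retain the per-category detail.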

  13. Oscillatory brain dynamics associated with the automatic processing of emotion in words.

    Science.gov (United States)

    Wang, Lin; Bastiaansen, Marcel

    2014-10-01

    This study examines the automaticity of processing the emotional aspects of words, and characterizes the oscillatory brain dynamics that accompany this automatic processing. Participants read emotionally negative, neutral and positive nouns while performing a color detection task in which only perceptual-level analysis was required. Event-related potentials and time-frequency representations were computed from the concurrently measured EEG. Negative words elicited a larger P2 and a larger late positivity than positive and neutral words, indicating deeper semantic/evaluative processing of negative words. In addition, sustained alpha power suppressions were found for the emotional compared to neutral words, in the time range from 500 to 1000 ms post-stimulus. These results suggest that sustained attention was allocated to the emotional words, whereas the attention allocated to the neutral words was released after an initial analysis. This seems to hold even when the emotional content of the words is task-irrelevant. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Fast Mapping Across Time: Memory Processes Support Children's Retention of Learned Words

    Directory of Open Access Journals (Sweden)

    Haley Vlach

    2012-02-01

    Children's remarkable ability to map linguistic labels to objects in the world is referred to as fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain-general memory processes. Memory processes are critical to children's word learning and the role of one such process, forgetting, is discussed in detail: forgetting supports both word mapping and the generalization of words and categories.

  15. The Impact of Orthographic Connectivity on Visual Word Recognition in Arabic: A Cross-Sectional Study

    Science.gov (United States)

    Khateb, Asaid; Khateb-Abdelgani, Manal; Taha, Haitham Y.; Ibrahim, Raphiq

    2014-01-01

    This study aimed at assessing the effects of letters' connectivity in Arabic on visual word recognition. For this purpose, reaction times (RTs) and accuracy scores were collected from ninety third-, sixth- and ninth-grade native Arabic speakers during a lexical decision task, using fully connected (Cw), partially connected (PCw) and…

  16. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    Science.gov (United States)

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  17. Behavioral and electrophysiological signatures of word translation processes.

    Science.gov (United States)

    Jost, Lea B; Radman, Narges; Buetler, Karin A; Annoni, Jean-Marie

    2018-01-31

    Translation is a demanding process during which a message is analyzed, translated and communicated from one language to another. Despite numerous studies on translation mechanisms, the electrophysiological processes underlying translation with overt production remain largely unexplored. Here, we investigated how behavioral response patterns and spatial-temporal brain dynamics differ in a translation compared to a control within-language word-generation task. We also investigated how forward and backward translation differs on the behavioral and electrophysiological level. To address these questions, healthy late bilingual subjects performed a translation and a within-language control task while a 128-channel EEG was recorded. Behavioral data showed faster responses for translation compared to within-language word generation and faster responses for backward than forward translation. The ERP analysis revealed stronger early processes for between- than within-language word generation. Later (424-630 ms) differences were characterized by distinct engagement of domain-general control networks, namely self-monitoring and lexical access interference. Language asymmetry effects occurred at a later stage (600 ms), reflecting differences in conceptual processing characterized by a larger involvement of areas implicated in attention, arousal and awareness for forward versus backward translation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Reading Function and Content Words in Subtitled Videos

    Science.gov (United States)

    Szarkowska, Agnieszka; Łogińska, Maria

    2016-01-01

    In this study, we examined how function and content words are read in intra- and interlingual subtitles. We monitored eye movements of a group of 39 deaf, 27 hard of hearing, and 56 hearing Polish participants while they viewed English and Polish videos with Polish subtitles. We found that function words and short content words received less visual attention than longer content words, which was reflected in shorter dwell time, lower number of fixations, shorter first fixation duration, and lower subject hit count. Deaf participants dwelled significantly longer on function words than other participants, which may be an indication of their difficulty in processing this type of word. The findings are discussed in the context of classical reading research and applied research on subtitling. PMID:26681268

  19. The influence of print exposure on the body-object interaction effect in visual word recognition.

    Science.gov (United States)

    Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M

    2012-01-01

    We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  20. Comparing Local Descriptors and Bags of Visual Words to Deep Convolutional Neural Networks for Plant Recognition

    NARCIS (Netherlands)

    Pawara, Pornntiwa; Okafor, Emmanuel; Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2017-01-01

    The use of machine learning and computer vision methods for recognizing different plants from images has attracted lots of attention from the community. This paper aims at comparing local feature descriptors and bags of visual words with different classifiers to deep convolutional neural networks…
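The bag-of-visual-words representation being compared against CNNs can be sketched in a few lines: each local descriptor is assigned to its nearest codebook centroid, and the image becomes a histogram of visual-word counts. The codebook and descriptors below are toy values, not features from the paper:

```python
def nearest_word(descriptor, codebook):
    """Index of the closest visual word (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((d - c) ** 2
                                 for d, c in zip(descriptor, codebook[i])))

def bow_histogram(descriptors, codebook):
    """Quantize all local descriptors and count visual-word occurrences."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]            # k=3 'visual words'
descriptors = [(0.1, 0.2), (0.9, 1.1), (0.1, 0.9), (1.0, 0.8)]
print(bow_histogram(descriptors, codebook))  # [1, 2, 1]
```

In practice the codebook comes from k-means over descriptors such as SIFT or HOG, and the resulting histograms feed a classifier (e.g., an SVM); the comparison with CNNs asks whether learned features beat this hand-crafted pipeline.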

  1. Surviving Blind Decomposition: A Distributional Analysis of the Time-Course of Complex Word Recognition

    Science.gov (United States)

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-01-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…

  2. Levels-of-processing effect on word recognition in schizophrenia.

    Science.gov (United States)

    Ragland, J Daniel; Moelter, Stephen T; McGrath, Claire; Hill, S Kristian; Gur, Raquel E; Bilker, Warren B; Siegel, Steven J; Gur, Ruben C

    2003-12-01

    Individuals with schizophrenia have difficulty organizing words semantically to facilitate encoding. This is commonly attributed to organizational rather than semantic processing limitations. By requiring participants to classify and encode words on either a shallow (e.g., uppercase/lowercase) or deep level (e.g., concrete/abstract), the levels-of-processing paradigm eliminates the need to generate organizational strategies. This paradigm was administered to 30 patients with schizophrenia and 30 healthy comparison subjects to test whether providing a strategy would improve patient performance. Word classification during shallow and deep encoding was slower and less accurate in patients. Patients also responded slowly during recognition testing and maintained a more conservative response bias following deep encoding; however, both groups showed a robust levels-of-processing effect on recognition accuracy, with unimpaired patient performance following both shallow and deep encoding. This normal levels-of-processing effect in the patient sample suggests that semantic processing is sufficiently intact for patients to benefit from organizational cues. Memory remediation efforts may therefore be most successful if they focus on teaching patients to form organizational strategies during initial encoding.

  3. Processing concrete words: fMRI evidence against a specific right-hemisphere involvement.

    Science.gov (United States)

    Fiebach, Christian J; Friederici, Angela D

    2004-01-01

    Behavioral, patient, and electrophysiological studies have been taken as support for the assumption that processing of abstract words is confined to the left hemisphere, whereas concrete words are processed also by right-hemispheric brain areas. These are thought to provide additional information from an imaginal representational system, as postulated in the dual-coding theory of memory and cognition. Here we report new event-related fMRI data on the processing of concrete and abstract words in a lexical decision task. While abstract words activated a subregion of the left inferior frontal gyrus (BA 45) more strongly than concrete words, specific activity for concrete words was observed in the left basal temporal cortex. These data as well as data from other neuroimaging studies reviewed here are not compatible with the assumption of a specific right-hemispheric involvement for concrete words. The combined findings rather suggest a revised view of the neuroanatomical bases of the imaginal representational system assumed in the dual-coding theory, at least with respect to word recognition.

  4. Reevaluating split-fovea processing in word recognition: hemispheric dominance, retinal location, and the word-nonword effect.

    Science.gov (United States)

    Jordan, Timothy R; Paterson, Kevin B; Kurtev, Stoyan

    2009-03-01

    Many studies have claimed that hemispheric projections are split precisely at the foveal midline and so hemispheric asymmetry affects word recognition right up to the point of fixation. To investigate this claim, four-letter words and nonwords were presented to the left or right of fixation, either close to fixation in foveal vision or farther from fixation in extrafoveal vision. Presentation accuracy was controlled using an eyetracker linked to a fixation-contingent display. Words presented foveally produced identical performance on each side of fixation, but words presented extrafoveally showed a clear left-hemisphere (LH) advantage. Nonwords produced no evidence of hemispheric asymmetry in any location. Foveal stimuli also produced an identical word-nonword effect on each side of fixation, whereas extrafoveal stimuli produced a word-nonword effect only for LH (not right-hemisphere) displays. These findings indicate that functional unilateral projections to contralateral hemispheres exist in extrafoveal locations but provide no evidence of a functional division in hemispheric processing at fixation.

  5. Fast Mapping Across Time: Memory Processes Support Children's Retention of Learned Words.

    Science.gov (United States)

    Vlach, Haley A; Sandhofer, Catherine M

    2012-01-01

    Children's remarkable ability to map linguistic labels to referents in the world is commonly called fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain-general memory processes. Memory processes are critical to children's word learning and the role of one such process, forgetting, is discussed in detail - forgetting supports extended mapping by promoting the memory and generalization of words and categories.

  6. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Science.gov (United States)

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
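The sensitivity measure d' used here comes from signal detection theory: d' = z(hit rate) − z(false-alarm rate), which separates perceptual sensitivity from response bias. A generic computation (the trial counts below are made up for illustration, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a standard 1/(2N) correction to avoid rates of exactly 0 or 1."""
    z = NormalDist().inv_cdf
    def rate(k, n):
        return min(max(k / n, 1 / (2 * n)), 1 - 1 / (2 * n))
    return (z(rate(hits, hits + misses))
            - z(rate(false_alarms, false_alarms + correct_rejections)))

# e.g. 45 hits / 5 misses on letter-present trials, and 10 false alarms /
# 40 correct rejections on letter-absent trials:
print(round(d_prime(45, 5, 10, 40), 2))  # 2.12
```

An auditory cue that raises d' (rather than merely shifting bias) is what licenses the claim that hearing the word genuinely improved detection, not just the willingness to say "present".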

  7. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Directory of Open Access Journals (Sweden)

    Gary Lupyan

    BACKGROUND: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. METHODOLOGY/PRINCIPAL FINDINGS: Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. CONCLUSIONS/SIGNIFICANCE: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  8. The Influence of Print Exposure on the Body-Object Interaction Effect in Visual Word Recognition

    Directory of Open Access Journals (Sweden)

    Dana Hansen

    2012-05-01

    We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger facilitatory BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that a facilitatory BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  9. Word Order Processing in the Bilingual Brain

    Science.gov (United States)

    Saur, Dorothee; Baumgaertner, Annette; Moehring, Anja; Buchel, Christian; Bonnesen, Matthias; Rose, Michael; Musso, Mariachristina; Meisel, Jurgen M.

    2009-01-01

    One of the issues debated in the field of bilingualism is the question of a "critical period" for second language acquisition. Recent studies suggest an influence of age of onset of acquisition (AOA) particularly on syntactic processing; however, the processing of word order in a sentence context has not yet been examined specifically. We used…

  10. Beginners Remember Orthography when They Learn to Read Words: The Case of Doubled Letters

    Science.gov (United States)

    Wright, Donna-Marie; Ehri, Linnea C.

    2007-01-01

    Sight word learning and memory were studied to clarify how early during development readers process visual letter patterns that are not dictated by phonology, and whether their word learning is influenced by the legality of letter patterns. Forty kindergartners and first graders were taught to read 12 words containing either single consonants…

  11. The Effects of Word Exposure Frequency and Elaboration of Word Processing on Incidental L2 Vocabulary Acquisition through Reading

    Science.gov (United States)

    Eckerth, Johannes; Tavakoli, Parveneh

    2012-01-01

    Research on incidental second language (L2) vocabulary acquisition through reading has claimed that repeated encounters with unfamiliar words and the relative elaboration of processing these words facilitate word learning. However, so far both variables have been investigated in isolation. To help close this research gap, the current study…

  12. Relevance of useful visual words in object retrieval

    Science.gov (United States)

    Qi, Siyuan; Luo, Yupin

    2013-07-01

    The most popular methods in object retrieval are largely based on the bag-of-words (BOW) framework, which is both effective and efficient. In this paper we present a method that uses the relations between words of the vocabulary to improve retrieval performance within the BOW framework. In the basic BOW retrieval framework, only a few words of the vocabulary are useful for retrieval, namely those that are spatially consistent across images. We introduce a method to select these useful words and build a relevance measure between them. We combine the useful-word relevance with the basic BOW framework and with query expansion. The relevance measure is able to discover latent related words that do not appear in the query image, so that we can obtain a more accurate vector model for retrieval. Combined with the query expansion method, retrieval performance is better and the time cost lower.
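A rough sketch of the general setting (all values and the expansion rule below are illustrative assumptions, not the paper's method): images are BOW histograms compared by cosine similarity, and a word-relevance matrix lets the query borrow weight for related visual words that are absent from the query image itself:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand(query, relevance, weight=0.5):
    """Borrow weight for related words: q'_i = q_i + w * sum_j R[i][j] * q_j."""
    n = len(query)
    return [query[i] + weight * sum(relevance[i][j] * query[j] for j in range(n))
            for i in range(n)]

# Toy vocabulary of 3 visual words; words 0 and 1 co-occur in the database,
# so word 1 is "latent": related to the query but absent from the query image.
relevance = [[0, 1, 0],
             [1, 0, 0],
             [0, 0, 0]]
query = [2, 0, 0]
database = {"imgA": [0, 3, 0], "imgB": [1, 0, 3]}

expanded = expand(query, relevance)   # [2, 1.0, 0]
ranked = sorted(database, key=lambda k: cosine(expanded, database[k]), reverse=True)
print(ranked)  # ['imgA', 'imgB'] -- expansion recovers imgA via the latent word
```

Without expansion, imgA (which shares only the latent word with the query) would score zero and rank last; the relevance term is what surfaces it.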

  13. Word form Encoding in Chinese Word Naming and Word Typing

    Science.gov (United States)

    Chen, Jenn-Yeu; Li, Cheng-Yi

    2011-01-01

    The process of word form encoding was investigated in primed word naming and word typing with Chinese monosyllabic words. The target words shared or did not share the onset consonants with the prime words. The stimulus onset asynchrony (SOA) was 100 ms or 300 ms. Typing required the participants to enter the phonetic letters of the target word,…

  14. The Influence of Sex Information on Gender Word Processing

    Science.gov (United States)

    Casado, Alba; Palma, Alfonso; Paolieri, Daniela

    2018-01-01

    Three different tasks (word repetition, lexical decision, and gender decision) were designed to explore the impact of the sex clues (sex of the speaker, sex of the addressee) and the type of gender (semantic, arbitrary) on the processing of isolated Spanish gendered words. The findings showed that the grammatical gender feature was accessed when…

  15. Not all reading is alike: Task modulation of magnetic evoked response to visual word

    Directory of Open Access Journals (Sweden)

    Pavlova A. A.

    2017-09-01

    Background. Previous studies have shown that the brain response to a written word depends on the task: whether the word is a target in a version of a lexical decision task or should be read silently. Although this effect has been interpreted as evidence for an interaction between word recognition processes and task demands, it may also be caused by greater attention allocation to the target word. Objective. We aimed to examine the task effect on the brain response evoked by non-target written words. Design. Using MEG and magnetic source imaging, we compared the spatial-temporal pattern of brain response elicited by a noun cue when it was read silently either without an additional task (SR) or with a requirement to produce an associated verb (VG). Results. The task demands penetrated into early (200-300 ms) and late (500-800 ms) stages of word processing by enhancing the brain response under the VG versus the SR condition. The cortical sources of the early response were localized to bilateral inferior occipitotemporal and anterior temporal cortex, suggesting that the more demanding VG task required elaborated lexical-semantic analysis. The late effect was observed in the associative auditory areas in middle and superior temporal gyri and in the motor representation of articulators. Our results suggest that a remote goal plays a pivotal role in enhanced recruitment of cortical structures underlying orthographic, semantic and sensorimotor dimensions of written word perception from the early processing stages. Surprisingly, we found that to fulfil a more challenging goal the brain progressively engaged resources of the right hemisphere throughout all stages of silent reading. Conclusion. Our study demonstrates that deeper processing of linguistic input amplifies activation of brain areas involved in integration of speech perception and production. This is consistent with theories that emphasize the role of sensorimotor integration in speech understanding.

  16. Feature-Specific Event-Related Potential Effects to Action- and Sound-Related Verbs during Visual Word Recognition.

    Science.gov (United States)

    Popp, Margot; Trumpp, Natalie M; Kiefer, Markus

    2016-01-01

    Grounded cognition theories suggest that conceptual representations essentially depend on modality-specific sensory and motor systems. Feature-specific brain activation across different feature types such as action or audition has been intensively investigated in nouns, while studies of feature-specific conceptual category differences in verbs have mainly focused on body-part-specific effects. The present work aimed at assessing whether feature-specific event-related potential (ERP) differences between action and sound concepts, as previously observed in nouns, can also be found within the word class of verbs. In Experiment 1, participants were visually presented with carefully matched sound and action verbs within a lexical decision task, which provides implicit access to word meaning and minimizes strategic access to semantic word features. Experiment 2 tested whether pre-activating the verb concept in a context phase, in which the verb is presented with a related context noun, modulates subsequent feature-specific action vs. sound verb processing within the lexical decision task. In Experiment 1, ERP analyses revealed a differential ERP polarity pattern for action and sound verbs at parietal and central electrodes similar to previous results in nouns. Pre-activation of the meaning of verbs in the preceding context phase in Experiment 2 resulted in a polarity reversal of feature-specific ERP effects in the lexical decision task compared with Experiment 1. This parallels analogous earlier findings for primed action- and sound-related nouns. In line with grounded cognition theories, our ERP study provides evidence for a differential processing of action and sound verbs similar to earlier observations for concrete nouns. Although the localizational value of ERPs must be viewed with caution, our results indicate that the meaning of verbs is linked to different neural circuits depending on conceptual feature relevance.

  17. An Action Research on Deep Word Processing Strategy Instruction

    Science.gov (United States)

    Zhang, Limei

    2010-01-01

    For a long time, how to memorize more words and keep them longer in mind has been a primary and everlasting problem in vocabulary teaching and learning. This study focused on deep processing as a word-memorizing strategy in the contextualizing, de-contextualizing, and re-contextualizing learning stages. It also examined possible effects of such pedagogy on…

  18. VStops: A Thinking Strategy and Visual Representation Approach in Mathematical Word Problem Solving toward Enhancing STEM Literacy

    Science.gov (United States)

    Abdullah, Nasarudin; Halim, Lilia; Zakaria, Effandi

    2014-01-01

    This study aimed to determine the impact of strategic thinking and visual representation approaches (VStops) on the achievement, conceptual knowledge, metacognitive awareness, awareness of problem-solving strategies, and student attitudes toward mathematical word problem solving among primary school students. The experimental group (N = 96)…

  19. Greek-English Word Processing on the Macintosh.

    Science.gov (United States)

    Rusten, Jeffrey

    1986-01-01

    Discusses the complete Greek-English word processing system of the Apple Macintosh computer. Describes the features of its operating system, shows how the Greek fonts look and work, and enumerates both the advantages and drawbacks of the Macintosh. (SED)

  20. Interference of spoken word recognition through phonological priming from visual objects and printed words

    NARCIS (Netherlands)

    McQueen, J.M.; Hüttig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase

  1. The Concreteness Effect and the Bilingual Lexicon: The Impact of Visual Stimuli Attachment on Meaning Recall of Abstract L2 Words

    Science.gov (United States)

    Farley, Andrew P.; Ramonda, Kris; Liu, Xun

    2012-01-01

    According to the Dual-Coding Theory (Paivio & Desrochers, 1980), words that are associated with rich visual imagery are more easily learned than abstract words due to what is termed the concreteness effect (Altarriba & Bauer, 2004; de Groot, 1992; de Groot et al., 1994; ter Doest & Semin, 2005). The present study examined the effects of attaching…

  2. Word Processing for All.

    Science.gov (United States)

    Abbott, Chris

    1991-01-01

    Pupils with special educational needs are finding that the use of word processors can give them a new confidence and pride in their own abilities. This article describes the use of such devices as the "mouse," on-screen word lists, spell checkers, and overlay keyboards. (JDD)

  3. Skipping of Chinese characters does not rely on word-based processing.

    Science.gov (United States)

    Lin, Nan; Angele, Bernhard; Hua, Huimin; Shen, Wei; Zhou, Junyi; Li, Xingshan

    2018-02-01

    Previous eye-movement studies have indicated that people tend to skip extremely high-frequency words in sentence reading, such as "the" in English and "的/de" in Chinese. Two alternative hypotheses have been proposed to explain how this frequent skipping happens in Chinese reading: one assumes that skipping happens when the preview has been fully identified at the word level (word-based skipping); the other assumes that skipping happens whenever the preview character is easy to identify, regardless of whether lexical processing has been completed or not (character-based skipping). Using the gaze-contingent display change paradigm, we examined the two hypotheses by substituting the preview of the third character of a four-character Chinese word with the high-frequency Chinese character "的/de", which should disrupt the ongoing word-level processing. The character-based skipping hypothesis predicts that this manipulation will enhance the skipping probability of the target character (i.e., the third character of the target word), because the character "的/de" has much higher character frequency than the original character. The word-based skipping hypothesis instead predicts a reduction of the skipping probability of the target character, because the presence of the character "的/de" is lexically infelicitous at the word level. The results supported the character-based skipping hypothesis, indicating that in Chinese reading the decision to skip a character can be made before integrating it into a word.

  4. Neurophysiological correlates of word processing deficits in isolated reading and isolated spelling disorders.

    Science.gov (United States)

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2018-03-01

    In consistent orthographies, isolated reading disorders (iRD) and isolated spelling disorders (iSD) are nearly as common as combined reading-spelling disorders (cRSD). However, the exact nature of the underlying word processing deficits in isolated versus combined literacy deficits is not yet well understood. We applied a phonological lexical decision task (including words, pseudohomophones, legal and illegal pseudowords) during ERP recording to investigate the neurophysiological correlates of lexical and sublexical word processing in children with iRD, iSD and cRSD compared to typically developing (TD) 9-year-olds. TD children showed enhanced early sensitivity (N170) for word material and for the violation of orthographic rules compared to the other groups. Lexical orthographic effects (higher LPC amplitude for words than for pseudohomophones) were the same in the TD and iRD groups, although processing took longer in children with iRD. In the iSD and cRSD groups, lexical orthographic effects were evident and stable over time only for correctly spelled words. Orthographic representations were intact in iRD children, but word processing took longer compared to TD. Children with spelling disorders had partly missing orthographic representations. Our study is the first to specify the underlying neurophysiology of word processing deficits associated with isolated literacy deficits. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  6. The mechanism of valence-space metaphors: ERP evidence for affective word processing.

    Science.gov (United States)

    Xie, Jiushu; Wang, Ruiming; Chang, Song

    2014-01-01

    Embodied cognition contends that the representation and processing of concepts involve perceptual, somatosensory, motoric, and other physical re-experiencing information. In this view, affective concepts are also grounded in physical information. For instance, people often say "feeling down" or "cheer up" in daily life. These phrases use spatial information to understand affective concepts. This process is referred to as valence-space metaphor. Valence-space metaphors refer to the employment of spatial information (lower/higher space) to elaborate affective concepts (negative/positive concepts). Previous studies have demonstrated that processing affective words affects performance on a spatial detection task. However, the mechanism(s) behind this effect remain unclear. In the current study, we hypothesized that processing affective words might produce spatial information. Consequently, spatial information would affect the following spatial cue detection/discrimination task. In Experiment 1, participants were asked to remember an affective word. Then, they completed a spatial cue detection task while event-related potentials were recorded. The results indicated that the top cues induced enhanced amplitude of P200 component while participants kept positive words relative to negative words in mind. On the contrary, the bottom cues induced enhanced P200 amplitudes while participants kept negative words relative to positive words in mind. In Experiment 2, we conducted a behavioral experiment that employed a similar paradigm to Experiment 1, but used arrows instead of dots to test the attentional nature of the valence-space metaphor. We found a similar facilitation effect as found in Experiment 1. Positive words facilitated the discrimination of upper arrows, whereas negative words facilitated the discrimination of lower arrows. In summary, affective words might activate spatial information and cause participants to allocate their attention to corresponding locations

  7. The effect of visual and verbal modes of presentation on children's retention of images and words

    Science.gov (United States)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  8. The word-length effect and disyllabic words.

    Science.gov (United States)

    Lovatt, P; Avons, S E; Masterson, J

    2000-02-01

    Three experiments compared immediate serial recall of disyllabic words that differed on spoken duration. Two sets of long- and short-duration words were selected, in each case maximizing duration differences but matching for frequency, familiarity, phonological similarity, and number of phonemes, and controlling for semantic associations. Serial recall measures were obtained using auditory and visual presentation and spoken and picture-pointing recall. In Experiments 1a and 1b, using the first set of items, long words were better recalled than short words. In Experiments 2a and 2b, using the second set of items, no difference was found between long and short disyllabic words. Experiment 3 confirmed the large advantage for short-duration words in the word set originally selected by Baddeley, Thomson, and Buchanan (1975). These findings suggest that there is no reliable advantage for short-duration disyllables in span tasks, and that previous accounts of a word-length effect in disyllables are based on accidental differences between list items. The failure to find an effect of word duration casts doubt on theories that propose that the capacity of memory span is determined by the duration of list items or the decay rate of phonological information in short-term memory.

  9. Cortical reactions to verbal abuse: event-related brain potentials reflecting the processing of socially threatening words.

    Science.gov (United States)

    Wabnitz, Pascal; Martens, Ulla; Neuner, Frank

    2012-09-12

    Human information processing is sensitive to aversive stimuli, in particular to negative cues that indicate a threat to physical integrity. We investigated the extent to which these findings can be transferred to stimuli that are associated with a social rather than a physical threat. Event-related potentials were recorded during silent reading of neutral, positive, physically threatening, and socially threatening words, whereby socially threatening words were represented by swear words. We found facilitated processing of positive and physically threatening words in contrast to both neutral and socially threatening words at a first potential that emerged at about 120 ms after stimulus onset. At a semantic processing stage reflected by the N400, processing of all classes of affective words, including socially threatening words, differed from that of neutral words. We conclude that socially threatening words as well as neutral words capture more attentional resources than positive and physically threatening words at early stages. However, socially threatening words are processed in a manner similar to other emotional words and different from neutral words at higher levels.

  10. Language Identification of Kannada, Hindi and English Text Words Through Visual Discriminating Features

    Directory of Open Access Journals (Sweden)

    M.C. Padma

    2008-06-01

    Full Text Available In a multilingual country like India, a document may contain text words in more than one language. For a multilingual environment, a multilingual Optical Character Recognition (OCR) system is needed to read multilingual documents. It is therefore necessary to identify the different language regions of a document before feeding the document to the OCR of each individual language. The objective of this paper is to propose a procedure based on visual clues to identify the Kannada, Hindi and English text portions of an Indian multilingual document.

  11. Neighbourhood frequency effects in visual word recognition and naming

    NARCIS (Netherlands)

    Grainger, I.J.

    1988-01-01

    Two experiments are reported that examine the influence of a given word's orthographic neighbours (orthographically similar words) on the recognition and pronunciation of that word. In Experiment 1 (lexical decision), neighbourhood frequency, as opposed to stimulus-word frequency, was shown to have a

  12. Visual processing speed in old age

    DEFF Research Database (Denmark)

    Habekost, Thomas; vogel, asmus; Rostrup, Egill

    2013-01-01

    of the speed of a particular psychological process that are not confounded by the speed of other processes. We used Bundesen's (1990) Theory of Visual Attention (TVA) to obtain specific estimates of processing speed in the visual system controlled for the influence of response latency and individual variations...... dramatic aging effects were found for the perception threshold and the visual apprehension span. In the visual domain, cognitive aging seems to be most clearly related to reductions in processing speed....

  13. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
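    The record above rests on band-limited power estimates in the theta (~3-7 Hz) and alpha (~8-12 Hz) ranges. As a minimal, hypothetical illustration of what "power in a frequency band" means, the sketch below estimates power at single theta and alpha frequencies for one synthetic trial using the Goertzel algorithm; the study's actual pipeline used full time-frequency analysis and spatial filtering, which this does not reproduce, and all names and values here are illustrative.

    ```python
    from math import cos, pi, sin

    def band_power(signal, fs, freq):
        """Estimate power at one target frequency with the Goertzel algorithm,
        a cheap alternative to a full FFT when only a few bins are needed."""
        w = 2.0 * pi * freq / fs
        coeff = 2.0 * cos(w)
        s_prev = s_prev2 = 0.0
        for x in signal:
            s = x + coeff * s_prev - s_prev2   # second-order recurrence
            s_prev2, s_prev = s_prev, s
        # Squared magnitude of the DFT-equivalent bin.
        return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

    # Synthetic 1-second "trial": a pure 10 Hz (alpha-range) oscillation at 250 Hz.
    fs = 250
    trial = [sin(2 * pi * 10 * t / fs) for t in range(fs)]
    alpha = band_power(trial, fs, 10.0)   # power at the alpha-range component
    theta = band_power(trial, fs, 5.0)    # power at a theta-range frequency
    ```

    For this synthetic trial, alpha power is large and theta power is near zero, which is the kind of contrast (suppression vs. enhancement per band) the study quantifies across conditions.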

  14. Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.

    Science.gov (United States)

    Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia

    2018-02-12

    Words that correspond to a potential sensory experience (concrete words) have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words (context availability, emotional valence, and arousal) but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we tested further the embodied account of concreteness effects in visual word recognition championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). In particular, we investigated the influence of concreteness in three word recognition tasks (lexical decision, progressive demasking, and word naming) using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, & Pallier, Frontiers in Psychology, 2: 306, 2011). The norms can be downloaded as supplementary material provided with this article.
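    The multiple regression approach named in this record (predicting reaction times from norms such as concreteness) can be sketched with a minimal ordinary-least-squares implementation. This is an illustrative toy, not the authors' analysis: the predictor names and data below are hypothetical, and the real analyses used the Chronolex reaction times with many more covariates.

    ```python
    def ols(X, y):
        """Ordinary least squares via the normal equations (X'X) b = X'y.
        A column of ones is prepended for the intercept."""
        rows = [[1.0] + list(r) for r in X]
        k = len(rows[0])
        # Build X'X and X'y.
        xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
        xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
        # Solve the augmented system by Gaussian elimination with partial pivoting.
        a = [row[:] + [b] for row, b in zip(xtx, xty)]
        for col in range(k):
            piv = max(range(col, k), key=lambda r: abs(a[r][col]))
            a[col], a[piv] = a[piv], a[col]
            for r in range(col + 1, k):
                f = a[r][col] / a[col][col]
                a[r] = [x - f * p for x, p in zip(a[r], a[col])]
        beta = [0.0] * k
        for i in reversed(range(k)):
            beta[i] = (a[i][k] - sum(a[i][j] * beta[j]
                                     for j in range(i + 1, k))) / a[i][i]
        return beta  # [intercept, slope_1, slope_2, ...]

    # Hypothetical items: [concreteness, log frequency] -> reaction time (ms).
    X = [[1, 1], [2, 1], [1, 2], [3, 2], [2, 3]]
    y = [600 - 20 * c + 5 * f for c, f in X]  # toy generating rule
    beta = ols(X, y)
    ```

    A negative concreteness slope here would correspond to the classic finding that more concrete words are recognized faster, all else being equal.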

  15. Too Little, Too Late: Reduced Visual Span and Speed Characterize Pure Alexia

    Science.gov (United States)

    Habekost, Thomas; Leff, Alexander P.

    2009-01-01

    Whether normal word reading includes a stage of visual processing selectively dedicated to word or letter recognition is highly debated. Characterizing pure alexia, a seemingly selective disorder of reading, has been central to this debate. Two main theories claim either that (1) pure alexia is caused by damage to a reading-specific brain region in the left fusiform gyrus, or (2) pure alexia results from a general visual impairment that may particularly affect simultaneous processing of multiple items. We tested these competing theories in 4 patients with pure alexia using sensitive psychophysical measures and mathematical modeling. Recognition of single letters and digits in the central visual field was impaired in all patients. Visual apprehension span was also reduced for both letters and digits in all patients. The only cortical region lesioned across all 4 patients was the left fusiform gyrus, indicating that this region subserves a function broader than letter or word identification. We suggest that a seemingly pure disorder of reading can arise from a general reduction of visual speed and span, and explain why this has a disproportionate impact on word reading while recognition of other visual stimuli is less obviously affected. PMID:19366870

  16. Electrophysiological differences in the processing of affective information in words and pictures.

    Science.gov (United States)

    Hinojosa, José A; Carretié, Luis; Valcárcel, María A; Méndez-Bértolo, Constantino; Pozo, Miguel A

    2009-06-01

    It is generally assumed that affective picture viewing is related to higher levels of physiological arousal than is the reading of emotional words. However, this assertion is based mainly on studies in which the processing of either words or pictures has been investigated under heterogeneous conditions. Positive, negative, relaxing, neutral, and background (stimulus fragments) words and pictures were presented to subjects in two experiments under equivalent experimental conditions. In Experiment 1, neutral words elicited an enhanced late positive component (LPC) that was associated with an increased difficulty in discriminating neutral from background stimuli. In Experiment 2, high-arousing pictures elicited an enhanced early negativity and LPC that were related to facilitated processing of these stimuli. Thus, it seems that under some circumstances, the processing of affective information captures attention only with more biologically relevant stimuli. Also, these data might be better interpreted on the basis of those models that postulate a different access to affective information for words and pictures.

  17. A randomized controlled trial of cognitive training using a visual speed of processing intervention in middle aged and older adults.

    Directory of Open Access Journals (Sweden)

    Fredric D Wolinsky

    Full Text Available Age-related cognitive decline is common and may lead to substantial difficulties and disabilities in everyday life. We hypothesized that 10 hours of visual speed of processing training would prevent age-related declines and potentially improve cognitive processing speed. Within two age bands (50-64 and ≥65), 681 patients were randomized to (a) three computerized visual speed of processing training arms (10 hours on-site, 14 hours on-site, or 10 hours at-home) or (b) an on-site attention control group using computerized crossword puzzles for 10 hours. The primary outcome was the Useful Field of View (UFOV) test, and the secondary outcomes were the Trail Making Tests (Trails A and B), Symbol Digit Modalities Test (SDMT), Stroop Color and Word Tests, Controlled Oral Word Association Test (COWAT), and the Digit Vigilance Test (DVT), which were assessed at baseline and at one year. 620 participants (91%) completed the study and were included in the analyses. Linear mixed models were used with Blom rank transformations within age bands. All intervention groups had (p<0.05) small to medium standardized effect size improvements on UFOV (Cohen's d = -0.322 to -0.579, depending on intervention arm), Trails A (d = -0.204 to -0.265), Trails B (d = -0.225 to -0.320), SDMT (d = 0.263 to 0.351), and Stroop Word (d = 0.240 to 0.271). Converted to years of protection against age-related cognitive declines, these effects reflect 3.0 to 4.1 years on UFOV, 2.2 to 3.5 years on Trails A, 1.5 to 2.0 years on Trails B, 5.4 to 6.6 years on SDMT, and 2.3 to 2.7 years on Stroop Word. Visual speed of processing training delivered on-site or at-home to middle-aged or older adults using standard home computers resulted in stabilization or improvement in several cognitive function tests. Widespread implementation of this intervention is feasible. ClinicalTrials.gov NCT-01165463.
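    Two of the statistics named in this record, the Blom rank transformation applied before the mixed models and the Cohen's d effect sizes, follow standard published formulas and can be sketched in a few lines. This is a minimal illustration of those formulas only, not the study's code or data; the toy inputs are hypothetical.

    ```python
    from statistics import NormalDist, mean, stdev

    def blom_ranks(values):
        """Blom (1958) normal scores: inverse-normal transform of ranks,
        Phi^-1((r - 3/8) / (n + 1/4)). Ties keep their order of appearance
        (a simplification of the usual tie handling)."""
        n = len(values)
        order = sorted(range(n), key=lambda i: values[i])
        ranks = [0] * n
        for r, i in enumerate(order, start=1):
            ranks[i] = r
        nd = NormalDist()
        return [nd.inv_cdf((r - 0.375) / (n + 0.25)) for r in ranks]

    def cohens_d(group_a, group_b):
        """Standardized mean difference using a pooled standard deviation."""
        na, nb = len(group_a), len(group_b)
        pooled = (((na - 1) * stdev(group_a) ** 2
                   + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
        return (mean(group_a) - mean(group_b)) / pooled
    ```

    The Blom transform maps skewed outcome scores onto an approximately normal scale before model fitting, and d values around 0.2-0.5 (as reported above) are conventionally read as small to medium effects.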

  18. Reading component skills in dyslexia: word recognition, comprehension and processing speed.

    Science.gov (United States)

    de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C

    2014-01-01

    The cognitive model of reading comprehension (RC) posits that RC is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills could be integrated into this model, such as processing speed, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness for word recognition. The following study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. 40 children and adolescents (8-13 years) were divided into a dyslexic group (DG; 18 children, MA = 10.78, SD = 1.66) and a control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences in accuracy on the oral and RC, phonological awareness, naming, and vocabulary scores. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. The results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on an RC test. The data support the importance of delimiting the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  19. 32 CFR Appendix D to Part 323 - Word Processing Center (WPC) Safeguards

    Science.gov (United States)

    2010-07-01

    ... DEFENSE (CONTINUED) PRIVACY PROGRAM DEFENSE LOGISTICS AGENCY PRIVACY PROGRAM Pt. 323, App. D Appendix D to... (WPCs) operating independent of the customer's function. However, managers of word processing systems... addressed. C. Safeguarding Information During Receipt. 1. The word processing manager will establish...

  20. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing.

    Science.gov (United States)

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word's perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output.

  1. Build an Interactive Word Wall

    Science.gov (United States)

    Jackson, Julie

    2018-01-01

    Word walls visually display important vocabulary covered during class. Although teachers have often been encouraged to post word walls in their classrooms, little information is available to guide them. This article describes steps science teachers can follow to transform traditional word walls into interactive teaching tools. It also describes a…

  2. Syllable Transposition Effects in Korean Word Recognition

    Science.gov (United States)

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  3. Reading impairment in schizophrenia: dysconnectivity within the visual system.

    Science.gov (United States)

    Vinckier, Fabien; Cohen, Laurent; Oppenheim, Catherine; Salvador, Alexandre; Picard, Hernan; Amado, Isabelle; Krebs, Marie-Odile; Gaillard, Raphaël

    2014-01-01

    Patients with schizophrenia suffer from perceptual visual deficits. It remains unclear whether those deficits result from an isolated impairment of a localized brain process or from a more diffuse long-range dysconnectivity within the visual system. We aimed to explore, with a reading paradigm, the functioning of both ventral and dorsal visual pathways and their interaction in schizophrenia. Patients with schizophrenia and control subjects were studied using event-related functional MRI (fMRI) while reading words that were progressively degraded through word rotation or letter spacing. Reading intact or minimally degraded single words involves mainly the ventral visual pathway. Conversely, reading in non-optimal conditions involves both the ventral and the dorsal pathway. The reading paradigm thus allowed us to study the functioning of both pathways and their interaction. Behaviourally, patients with schizophrenia were selectively impaired at reading highly degraded words. While fMRI activation level was not different between patients and controls, functional connectivity between the ventral and dorsal visual pathways increased with word degradation in control subjects, but not in patients. Moreover, there was a negative correlation between the patients' behavioural sensitivity to stimulus degradation and dorso-ventral connectivity. This study suggests that perceptual visual deficits in schizophrenia could be related to dysconnectivity between dorsal and ventral visual pathways. © 2013 Published by Elsevier Ltd.

  4. Distance-Dependent Processing of Pictures and Words

    Science.gov (United States)

    Amit, Elinor; Algom, Daniel; Trope, Yaacov

    2009-01-01

    A series of 8 experiments investigated the association between pictorial and verbal representations and the psychological distance of the referent objects from the observer. The results showed that people better process pictures that represent proximal objects and words that represent distal objects than pictures that represent distal objects and…

  5. Word meaning in the ventral visual path: a perceptual to conceptual gradient of semantic coding.

    Science.gov (United States)

    Borghesani, Valentina; Pedregosa, Fabian; Buiatti, Marco; Amadon, Alexis; Eger, Evelyn; Piazza, Manuela

    2016-12-01

    The meaning of words referring to concrete items is thought of as a multidimensional representation that includes both perceptual (e.g., average size, prototypical color) and conceptual (e.g., taxonomic class) dimensions. Are these different dimensions coded in different brain regions? In healthy human subjects, we tested the presence of a mapping between the implied real object size (a perceptual dimension) and the taxonomic categories at different levels of specificity (conceptual dimensions) of a series of words, and the patterns of brain activity recorded with functional magnetic resonance imaging in six areas along the ventral occipito-temporal cortical path. Combining multivariate pattern classification and representational similarity analysis, we found that the real object size implied by a word appears to be primarily encoded in early visual regions, while the taxonomic category and sub-categorical cluster in more anterior temporal regions. This anteroposterior gradient of information content indicates that different areas along the ventral stream encode complementary dimensions of the semantic space. Copyright © 2016 Elsevier Inc. All rights reserved.
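
The representational similarity analysis (RSA) mentioned in this record can be illustrated with a minimal sketch: a model dissimilarity matrix built from one semantic dimension (here, implied real-object size) is rank-correlated with a dissimilarity matrix computed from activity patterns. This is not the study's code; the word set, the implied-size values, and the simulated "neural" patterns are invented for illustration:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical model dimension: implied real-object size for four words
# (e.g., ant, bee, horse, whale), plus simulated "neural" patterns
# (one 20-feature activity vector per word) that track that dimension.
rng = np.random.default_rng(0)
implied_size = np.array([0.1, 0.2, 0.8, 0.9])
neural = implied_size[:, None] + 0.05 * rng.standard_normal((4, 20))

# Representational dissimilarity matrices (condensed upper triangles):
# pairwise distances between the four conditions.
model_rdm = pdist(implied_size[:, None], metric="euclidean")
neural_rdm = pdist(neural, metric="euclidean")

# RSA statistic: rank correlation between model and neural RDMs.
rho, p = spearmanr(model_rdm, neural_rdm)
print(round(rho, 2))
```

With patterns this strongly organized by size, the rank correlation comes out near 1; in a real analysis the same comparison would be run per brain region to trace the perceptual-to-conceptual gradient.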

  6. Differential cognitive processing of Kanji and Kana words: do orthographic and semantic codes function in parallel in word matching task.

    Science.gov (United States)

    Kawakami, A; Hatta, T; Kogure, T

    2001-12-01

Relative engagements of the orthographic and semantic codes in Kanji and Hiragana word recognition were investigated. In Exp. 1, subjects judged whether pairs of Kanji words (prime and target) presented sequentially were physically identical to each other in the word condition. In the sentence condition, subjects decided whether the target word was valid for the prime sentence presented in advance. The results showed that response times to target words orthographically similar to the prime were significantly slower than to semantically related target words in the word condition, and that this was also the case in the sentence condition. In Exp. 2, subjects judged whether the target word written in Hiragana was physically identical to the prime word in the word condition. In the sentence condition, subjects decided if the target word was valid for the previously presented prime sentence. Analysis indicated that response times to orthographically similar words were slower than to semantically related words in the word condition but not in the sentence condition, wherein the response times to the semantically and orthographically similar words were largely the same. Based on these results, the differential contributions of orthographic and semantic codes in the cognitive processing of Japanese Kanji and Hiragana words are discussed.

  7. Reading component skills in dyslexia: word recognition, comprehension and processing speed

    Directory of Open Access Journals (Sweden)

    Darlene Godoy Oliveira

    2014-11-01

Full Text Available The cognitive model of reading comprehension posits that reading comprehension results from the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills, such as processing speed, could be integrated into this model, and have consistently indicated that processing speed influences and importantly predicts the main components of the model, such as vocabulary for comprehension and phonological awareness for word recognition. The present study evaluated the components of the reading comprehension model and predictive skills in children and adolescents with dyslexia. Forty children and adolescents (8-13 years) were divided into a Dyslexic Group (DG; 18 children, MA = 10.78, SD = 1.66) and a Control Group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary and phonological awareness were assessed. There were no group differences in accuracy on oral and reading comprehension, phonological awareness, naming, and vocabulary scores. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on reading comprehension tests. The data support the importance of delimiting the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  8. Summary Report of National Study of Word Processing Installations in Selected Business Organizations. A Summary of a Report on the National Word Processing Research Study of Delta Pi Epsilon.

    Science.gov (United States)

    Scriven, Jolene D.; And Others

    A study sought to determine current practices in word processing installations located in selected organizations throughout the United States. A related problem was to ascertain anticipated future developments in word processing to provide information for educational institutions preparing workers for the business office. Six interview instruments…

  9. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing

    Directory of Open Access Journals (Sweden)

Ensie Abbassi

    2015-07-01

Full Text Available Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word’s perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output.

  10. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing

    Science.gov (United States)

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I.; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word’s perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output. PMID:26217288

  11. Interference of spoken word recognition through phonological priming from visual objects and printed words

    OpenAIRE

    McQueen, J.; Huettig, F.

    2014-01-01

Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g...

  12. Interference Effects on the Recall of Pictures, Printed Words and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    Thirty college undergraduates participated in a study of the effects of acoustic and visual interference on the recall of word and picture triads in both short-term and long-term memory. The subjects were presented 24 triads of monosyllabic nouns representing all of the possible combinations of presentation types: pictures, printed words, and…

  13. Implicit integration in a case of integrative visual agnosia.

    Science.gov (United States)

    Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo

    2007-05-15

    We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.

  14. Brain activation during word identification and word recognition

    DEFF Research Database (Denmark)

    Jernigan, Terry L.; Ostergaard, Arne L.; Law, Ian

    1998-01-01

Previous memory research has suggested that the effects of prior study observed in priming tasks are functionally, and neurobiologically, distinct phenomena from the kind of memory expressed in conventional (explicit) memory tests. Evidence for this position comes from observed dissociations... between memory scores obtained with the two kinds of tasks. However, there is continuing controversy about the meaning of these dissociations. In recent studies, Ostergaard (1998a, Memory Cognit. 26:40-60; 1998b, J. Int. Neuropsychol. Soc., in press) showed that simply degrading visual word stimuli can... dramatically alter the degree to which word priming shows a dissociation from word recognition; i.e., effects of a number of factors on priming paralleled their effects on recognition memory tests when the words were degraded at test. In the present study, cerebral blood flow changes were measured while...

  15. Susceptibility to a multisensory speech illusion in older persons is driven by perceptual processes

    Directory of Open Access Journals (Sweden)

Annalisa Setti

    2013-09-01

Full Text Available Recent studies suggest that multisensory integration is enhanced in older adults, but it is not known whether this enhancement is solely driven by perceptual processes or is affected by cognitive processes. Using the ‘McGurk illusion’, in Experiment 1 we found that audio-visual integration of incongruent audio-visual words was higher in older adults than in younger adults, although recognition of either audio- or visual-only presented words was the same across groups. In Experiment 2 we tested recall of sentences within which an incongruent audio-visual speech word was embedded. The overall semantic meaning of the sentence was compatible with either one of the unisensory components of the target word and/or with the illusory percept. Older participants recalled more illusory audio-visual words in sentences than younger adults; however, there was no differential effect of word compatibility on recall for the two groups. Our findings suggest that the relatively high susceptibility to the audio-visual speech illusion in older participants is due more to perceptual than to cognitive processing.

  16. Visual processing speed in old age.

    Science.gov (United States)

    Habekost, Thomas; Vogel, Asmus; Rostrup, Egill; Bundesen, Claus; Kyllingsbaek, Søren; Garde, Ellen; Ryberg, Charlotte; Waldemar, Gunhild

    2013-04-01

    Mental speed is a common concept in theories of cognitive aging, but it is difficult to get measures of the speed of a particular psychological process that are not confounded by the speed of other processes. We used Bundesen's (1990) Theory of Visual Attention (TVA) to obtain specific estimates of processing speed in the visual system controlled for the influence of response latency and individual variations of the perception threshold. A total of 33 non-demented old people (69-87 years) were tested for the ability to recognize briefly presented letters. Performance was analyzed by the TVA model. Visual processing speed decreased approximately linearly with age and was on average halved from 70 to 85 years. Less dramatic aging effects were found for the perception threshold and the visual apprehension span. In the visual domain, cognitive aging seems to be most clearly related to reductions in processing speed. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.
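
The TVA-based estimate of processing speed described in this record can be illustrated with a toy sketch. In Bundesen's model, the probability of encoding a single briefly exposed letter grows exponentially with effective exposure duration, so the processing rate v (items/s) and perception threshold t0 can be recovered from accuracy data. The durations, parameter values, and grid-search fit below are hypothetical, not the study's analysis code:

```python
import numpy as np

def tva_accuracy(t, v, t0):
    """TVA-style encoding probability for a single letter:
    p(t) = 1 - exp(-v * (t - t0)) for exposures t above threshold t0."""
    return np.where(t > t0, 1.0 - np.exp(-v * np.clip(t - t0, 0, None)), 0.0)

# Hypothetical data: accuracy at several exposure durations (seconds),
# generated from v = 40 items/s and t0 = 20 ms.
durations = np.array([0.02, 0.03, 0.05, 0.08, 0.12, 0.20])
observed = tva_accuracy(durations, v=40.0, t0=0.02)

# Grid-search least-squares fit of processing speed v and threshold t0.
vs = np.linspace(5, 80, 151)
t0s = np.linspace(0.0, 0.04, 41)
best = min(
    ((v, t0) for v in vs for t0 in t0s),
    key=lambda p: np.sum((tva_accuracy(durations, *p) - observed) ** 2),
)
print(round(float(best[0]), 2), round(float(best[1]), 3))  # ≈ 40.0 0.02
```

An age-related halving of processing speed, as reported above, would appear here as the fitted v dropping by half while t0 changes comparatively little.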

  17. Word Processing in Dyslexics: An Automatic Decoding Deficit?

    Science.gov (United States)

Yap, Regina; van der Leij, Aryan

    1993-01-01

    Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)

  18. The Relationships among Cognitive Correlates and Irregular Word, Non-Word, and Word Reading

    Science.gov (United States)

Abu-Hamour, Bashir; Urso, Annmarie; Mather, Nancy

    2012-01-01

    This study explored four hypotheses: (a) the relationships among rapid automatized naming (RAN) and processing speed (PS) to irregular word, non-word, and word reading; (b) the predictive power of various RAN and PS measures, (c) the cognitive correlates that best predicted irregular word, non-word, and word reading, and (d) reading performance of…

  19. Short-term retention of pictures and words as a function of type of distraction and length of delay interval.

    Science.gov (United States)

    Pellegrino, J W; Siegel, A W; Dhawan, M

    1976-01-01

    Picture and word triads were tested in a Brown-Peterson short-term retention task at varying delay intervals (3, 10, or 30 sec) and under acoustic and simultaneous acoustic and visual distraction. Pictures were superior to words at all delay intervals under single acoustic distraction. Dual distraction consistently reduced picture retention while simultaneously facilitating word retention. The results were interpreted in terms of the dual coding hypothesis with modality-specific interference effects in the visual and acoustic processing systems. The differential effects of dual distraction were related to the introduction of visual interference and differential levels of functional acoustic interference across dual and single distraction tasks. The latter was supported by a constant 2/1 ratio in the backward counting rates of the acoustic vs. dual distraction tasks. The results further suggest that retention may not depend on total processing load of the distraction task, per se, but rather that processing load operates within modalities.

  20. Visual word learning in adults with dyslexia

    Directory of Open Access Journals (Sweden)

    Rosa Kit Wan Kwok

    2014-05-01

Full Text Available We investigated word learning in university and college students with a diagnosis of dyslexia and in typically-reading controls. Participants read aloud short (4-letter) and longer (7-letter) nonwords as quickly as possible. The nonwords were repeated across 10 blocks, using a different random order in each block. Participants returned 7 days later and repeated the experiment. Accuracy was high in both groups. The dyslexics were substantially slower than the controls at reading the nonwords throughout the experiment. They also showed a larger length effect, indicating less effective decoding skills. Learning was demonstrated by faster reading of the nonwords across repeated presentations and by a reduction in the difference in reading speeds between shorter and longer nonwords. The dyslexics required more presentations of the nonwords before the length effect became non-significant, only showing convergence in reaction times between shorter and longer items in the second testing session, whereas controls achieved convergence part-way through the first session. Participants also completed a psychological test battery assessing reading and spelling, vocabulary, phonological awareness, working memory, nonverbal ability and motor speed. The dyslexics performed at a similar level to the controls on nonverbal ability but significantly less well on all the other measures. Regression analyses found that decoding ability, measured as the speed of reading aloud nonwords when they were presented for the first time, was predicted by a composite of word reading and spelling scores (‘literacy’). Word learning was assessed in terms of the improvement in naming speeds over 10 blocks of training. Learning was predicted by vocabulary and working memory scores, but not by literacy, phonological awareness, nonverbal ability or motor speed. The results show that young dyslexic adults have problems both in pronouncing novel words and in learning new written words.
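
The learning measure described here, convergence of the word-length effect across training blocks, can be sketched with simulated naming latencies. The RT values, the exponential decay rate, and the 50 ms convergence criterion below are all invented for illustration; they are not the study's data or criterion:

```python
import numpy as np

def length_effect_by_block(rts_short, rts_long, criterion=50.0):
    """Word-length effect per block: mean naming RT for longer (7-letter)
    minus shorter (4-letter) nonwords, plus the first block at which the
    effect falls below the criterion (None if it never does)."""
    effects = rts_long.mean(axis=1) - rts_short.mean(axis=1)
    below = np.flatnonzero(effects < criterion)
    return effects, (int(below[0]) if below.size else None)

# Simulated RTs (ms): 10 training blocks x 20 items. The length effect
# shrinks exponentially as decoding of the nonwords becomes fluent.
rng = np.random.default_rng(1)
blocks = np.arange(10)[:, None]
rts_short = 700 + 200 * np.exp(-0.6 * blocks) + rng.normal(0, 2, (10, 20))
rts_long = 700 + 420 * np.exp(-0.6 * blocks) + rng.normal(0, 2, (10, 20))

effects, converged = length_effect_by_block(rts_short, rts_long)
print(converged)  # the simulated effect drops below 50 ms at block 3
```

On this measure, the slower convergence reported for the dyslexic group would show up as a later (or absent) convergence block under the same criterion.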

  1. Cascading activation from lexical processing to letter-level processing in written word production.

    Science.gov (United States)

    Buchwald, Adam; Falconer, Carolyn

    2014-01-01

    Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.

  2. Examining the central and peripheral processes of written word production through meta-analysis

    Directory of Open Access Journals (Sweden)

Jeremy Purcell

    2011-10-01

Full Text Available Producing written words requires central cognitive processes (such as orthographic long-term and working memory) as well as more peripheral processes responsible for generating the motor actions needed for producing written words in a variety of formats (handwriting, typing, etc.). In recent years, various functional neuroimaging studies have examined the neural substrates underlying the central and peripheral processes of written word production. This study provides the first quantitative meta-analysis of these studies by applying Activation Likelihood Estimation methods (Turkeltaub et al., 2002). For alphabetic languages, we identified 11 studies (with a total of 17 experimental contrasts) that had been designed to isolate central and/or peripheral processes of word spelling (total number of participants = 146). Three ALE meta-analyses were carried out. One involved the complete set of 17 contrasts; two others were applied to subsets of contrasts to distinguish the neural substrates of central from peripheral processes. These analyses identified a network of brain regions reliably associated with the central and peripheral processes of word spelling. Among the many significant results is the finding that the regions with the greatest correspondence across studies were in the left inferior temporal/fusiform gyri and left inferior frontal gyrus. Furthermore, although the angular gyrus has traditionally been identified as a key site within the written word production network, none of the meta-analyses found it to be a consistent site of activation, identifying instead a region just superior/medial to the left angular gyrus in the left posterior intraparietal sulcus. In general, these meta-analyses and the discussion of results provide a valuable foundation upon which future studies that examine the neural basis of written word production can build.
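
The core of the ALE approach applied in this meta-analysis can be sketched in simplified form: each reported focus is modeled as a Gaussian "probability of activation", foci within an experiment combine as a probabilistic union, and the ALE value at each position is the union across experiments, so it peaks where studies converge. The 1-D grid, coordinates, and Gaussian height below are invented for illustration (real ALE uses normalized 3-D kernels and permutation-based thresholding):

```python
import numpy as np

def modeled_activation(grid, foci, sigma, height=0.5):
    """Modeled activation (MA) map for one experiment: each focus
    contributes a Gaussian 'probability of activation' (peak < 1),
    and foci within the experiment combine as a probabilistic union."""
    p_none = np.ones_like(grid)
    for focus in foci:
        p = height * np.exp(-((grid - focus) ** 2) / (2 * sigma ** 2))
        p_none *= 1.0 - p
    return 1.0 - p_none

# Toy 1-D 'brain' (coordinates in mm) and foci reported by three
# hypothetical experiments; sigma reflects spatial uncertainty.
grid = np.arange(-60.0, 61.0, 1.0)
experiments = [[-45.0, 10.0], [-44.0], [-46.0, 30.0]]

# ALE map: union of the per-experiment MA maps.
ma_maps = [modeled_activation(grid, foci, sigma=8.0) for foci in experiments]
ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)

# Convergence peaks where all three experiments report nearby foci.
print(grid[np.argmax(ale)])  # -45.0
```

The isolated foci at 10 and 30 mm raise the ALE map only modestly, while the three clustered foci around -45 mm dominate, which is the sense in which ALE identifies regions of "greatest correspondence across studies".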

  3. Modality effects in delayed free recall and recognition: visual is better than auditory.

    Science.gov (United States)

    Penney, C G

    1989-08-01

    During presentation of auditory and visual lists of words, different groups of subjects generated words that either rhymed with the presented words or that were associates. Immediately after list presentation, subjects recalled either the presented or the generated words. After presentation and test of all lists, a final free recall test and a recognition test were given. Visual presentation generally produced higher recall and recognition than did auditory presentation for both encoding conditions. The results are not consistent with explanations of modality effects in terms of echoic memory or greater temporal distinctiveness of auditory items. The results are more in line with the separate-streams hypothesis, which argues for different kinds of input processing for auditory and visual items.

  4. Author’s response: A universal approach to modeling visual word recognition and reading: not only possible, but also inevitable.

    Science.gov (United States)

    Frost, Ram

    2012-10-01

    I have argued that orthographic processing cannot be understood and modeled without considering the manner in which orthographic structure represents phonological, semantic, and morphological information in a given writing system. A reading theory, therefore, must be a theory of the interaction of the reader with his/her linguistic environment. This outlines a novel approach to studying and modeling visual word recognition, an approach that focuses on the common cognitive principles involved in processing printed words across different writing systems. These claims were challenged by several commentaries that contested the merits of my general theoretical agenda, the relevance of the evolution of writing systems, and the plausibility of finding commonalities in reading across orthographies. Other commentaries extended the scope of the debate by bringing into the discussion additional perspectives. My response addresses all these issues. By considering the constraints of neurobiology on modeling reading, developmental data, and a large scope of cross-linguistic evidence, I argue that front-end implementations of orthographic processing that do not stem from a comprehensive theory of the complex information conveyed by writing systems do not present a viable approach for understanding reading. The common principles by which writing systems have evolved to represent orthographic, phonological, and semantic information in a language reveal the critical distributional characteristics of orthographic structure that govern reading behavior. Models of reading should thus be learning models, primarily constrained by cross-linguistic developmental evidence that describes how the statistical properties of writing systems shape the characteristics of orthographic processing. When this approach is adopted, a universal model of reading is possible.

  5. Dissociating Visual Form from Lexical Frequency Using Japanese

    Science.gov (United States)

    Twomey, Tae; Duncan, Keith J. Kawabata; Hogan, John S.; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T.

    2013-01-01

    In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form--an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally,…

  6. Is orthographic information from multiple parafoveal words processed in parallel: An eye-tracking study.

    Science.gov (United States)

    Cutter, Michael G; Drieghe, Denis; Liversedge, Simon P

    2017-08-01

In the current study we investigated whether orthographic information available from 1 upcoming parafoveal word influences the processing of another parafoveal word. Across 2 experiments we used the boundary paradigm (Rayner, 1975) to present participants with an identity preview of the 2 words after the boundary (e.g., hot pan), a preview in which 2 letters were transposed between these words (e.g., hop tan), or a preview in which the same 2 letters were substituted (e.g., hob fan). We hypothesized that if these 2 words were processed in parallel in the parafovea then we may observe significant preview benefits for the condition in which the letters were transposed between words relative to the condition in which the letters were substituted. However, no such effect was observed, with participants fixating the words for the same amount of time in both conditions. This was the case both when the transposition was made between the final and first letters of the 2 words (e.g., hop tan as a preview of hot pan; Experiment 1) and when the transposition maintained within-word letter position (e.g., pit hop as a preview of hit pop; Experiment 2). The implications of these findings are considered in relation to serial and parallel lexical processing during reading. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
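
The Experiment 1 preview manipulation described in this record, transposing or substituting the boundary-adjacent letters of a two-word target, is easy to sketch as string manipulation. The function below is a hypothetical illustration, not the authors' stimulus-generation code:

```python
def boundary_previews(word1, word2, sub_letters=None):
    """Previews for a two-word target (e.g., 'hot pan'): identity,
    transposed-letter ('hop tan': the final letter of word1 swaps with
    the first letter of word2), and, if replacement letters are given,
    substituted-letter ('hob fan')."""
    previews = {
        "identity": f"{word1} {word2}",
        "transposed": f"{word1[:-1]}{word2[0]} {word1[-1]}{word2[1:]}",
    }
    if sub_letters is not None:
        a, b = sub_letters
        previews["substituted"] = f"{word1[:-1]}{a} {b}{word2[1:]}"
    return previews

print(boundary_previews("hot", "pan", sub_letters=("b", "f")))
# {'identity': 'hot pan', 'transposed': 'hop tan', 'substituted': 'hob fan'}
```

The logic of the comparison is that transposed and substituted previews disrupt each word equally; only parallel processing of both parafoveal words would let the transposed letters be "recovered" and yield a larger preview benefit.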

  7. Touching words is not enough: how visual experience influences haptic-auditory associations in the "Bouba-Kiki" effect.

    Science.gov (United States)

    Fryer, Louise; Freeman, Jonathan; Pring, Linda

    2014-08-01

    Since Köhler's experiments in the 1920s, researchers have demonstrated a correspondence between words and shapes. Dubbed the "Bouba-Kiki" effect, these auditory-visual associations extend across cultures and are thought to be universal. More recently the effect has been shown in other modalities including taste, suggesting the effect is independent of vision. The study presented here tested the "Bouba-Kiki" effect in the auditory-haptic modalities, using 2D cut-outs and 3D models based on Köhler's original drawings. Presented with shapes they could feel but not see, sighted participants showed a robust "Bouba-Kiki" effect. However, in a sample of people with a range of visual impairments, from congenital total blindness to partial sight, the effect was significantly less pronounced. The findings suggest that, in the absence of a direct visual stimulus, visual imagery plays a role in crossmodal integration. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. The Relation of Visual and Auditory Aptitudes to First Grade Low Readers' Achievement under Sight-Word and Systematic Phonic Instructions. Research Report #36.

    Science.gov (United States)

    Gallistel, Elizabeth; And Others

    Ten auditory and ten visual aptitude measures were administered in the middle of first grade to a sample of 58 low readers. More than half of this low reader sample had scored more than a year below expected grade level on two or more aptitudes. Word recognition measures were administered after four months of sight word instruction and again after…

  9. Motivational priming and processing interrupt: startle reflex modulation during shallow and deep processing of emotional words.

    Science.gov (United States)

    Herbert, Cornelia; Kissler, Johanna

    2010-05-01

    Valence-driven modulation of the startle reflex, that is, larger eyeblinks during viewing of unpleasant pictures and inhibited blinks while viewing pleasant pictures, is well documented. The current study investigated whether this motivational priming pattern also occurs during processing of unpleasant and pleasant words, and to what extent it is influenced by shallow vs. deep encoding of verbal stimuli. Emotional and neutral adjectives were presented for 5 s, and the acoustically elicited startle eyeblink response was measured while subjects memorized the words by means of shallow or deep processing strategies. Results showed blink potentiation to unpleasant and blink inhibition to pleasant adjectives in subjects using shallow encoding strategies. In subjects using deep-encoding strategies, blinks were larger for pleasant than unpleasant or neutral adjectives. In line with this, free recall of pleasant words was also better in subjects who engaged in deep processing. The results suggest that motivational priming holds as long as processing is perceptual. However, during deep processing the startle reflex appears to represent a measure of "processing interrupt", facilitating blinks to those stimuli that are more deeply encoded. Copyright 2010 Elsevier B.V. All rights reserved.

  10. Landmark Image Retrieval Using Visual Synonyms

    NARCIS (Netherlands)

    Gavves, E.; Snoek, C.G.M.

    2010-01-01

    In this paper, we consider the incoherence problem of the visual words in bag-of-words vocabularies. Different from existing work, which performs assignment of words based solely on closeness in descriptor space, we focus on identifying pairs of independent, distant words - the visual synonyms -

  11. Imaginal, semantic, and surface-level processing of concrete and abstract words: an electrophysiological investigation.

    Science.gov (United States)

    West, W C; Holcomb, P J

    2000-11-01

    Words representing concrete concepts are processed more quickly and efficiently than words representing abstract concepts. Concreteness effects have also been observed in studies using event-related brain potentials (ERPs). The aim of this study was to examine concrete and abstract words using both reaction time (RT) and ERP measurements to determine (1) at what point in the stream of cognitive processing concreteness effects emerge and (2) how different types of cognitive operations influence these concreteness effects. Three groups of subjects performed a sentence verification task in which the final word of each sentence was concrete or abstract. For each group the truthfulness judgment required either (1) image generation, (2) semantic decision, or (3) evaluation of surface characteristics. Concrete and abstract words produced similar RTs and ERPs in the surface task, suggesting that postlexical semantic processing is necessary to elicit concreteness effects. In both the semantic and imagery tasks, RTs were shorter for concrete than for abstract words. This difference was greatest in the imagery task. Also, in both of these tasks concrete words elicited more negative ERPs than abstract words between 300 and 550 msec (N400). This effect was widespread across the scalp and may reflect activation in a linguistic semantic system common to both concrete and abstract words. ERPs were also more negative for concrete than abstract words between 550 and 800 msec. This effect was more frontally distributed and was most evident in the imagery task. We propose that this later anterior effect represents a distinct ERP component (N700) that is sensitive to the use of mental imagery. The N700 may reflect the access of specific characteristics of the imaged item or activation in a working memory system specific to mental imagery.
These results also support the extended dual-coding hypothesis that superior associative connections and the use of mental imagery both contribute

  12. Word Recognition Processing Efficiency as a Component of Second Language Listening

    Science.gov (United States)

    Joyce, Paul

    2013-01-01

    This study investigated the application of the speeded lexical decision task to L2 aural processing efficiency. One-hundred and twenty Japanese university students completed an aural word/nonword task. When the variation of lexical decision time (CV) was correlated with reaction time (RT), the results suggested that the single-word recognition…

  13. Brain activation during direct and indirect processing of positive and negative words.

    Science.gov (United States)

    Straube, Thomas; Sauer, Andreas; Miltner, Wolfgang H R

    2011-09-12

    The effects of task conditions on brain activation to emotional stimuli are poorly understood. In this event-related fMRI study, brain activation to negative and positive words (matched for arousal) and neutral words was investigated under two task conditions. Subjects either had to attend to the emotional meaning (direct task) or to non-emotional features of the words (indirect task). Regardless of task, positive vs. negative words led to increased activation in the ventral medial prefrontal cortex, while negative vs. positive words induced increased activation of the insula. Compared to neutral words, all emotional words were associated with increased activation of the amygdala. Finally, the direct condition, as compared to the indirect condition, led to enhanced activation to emotional vs. neutral words in the dorsomedial prefrontal cortex and the anterior cingulate cortex. These results suggest valence and arousal dependent brain activation patterns that are partially modulated by participants' processing mode of the emotional stimuli. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Visual synonyms for landmark image retrieval

    NARCIS (Netherlands)

    Gavves, E.; Snoek, C.G.M.; Smeulders, A.W.M.

    2012-01-01

    In this paper, we address the incoherence problem of the visual words in bag-of-words vocabularies. Different from existing work, which assigns words based on closeness in descriptor space, we focus on identifying pairs of independent, distant words - the visual synonyms - that are likely to host
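As background, both visual-synonyms records build on the standard bag-of-words step of hard-assigning each local descriptor to its nearest visual word in descriptor space; the synonym-mining procedure itself is not given in the truncated record. A minimal sketch with illustrative 2-D descriptors:

```python
def assign_word(descriptor, vocabulary):
    """Hard-assign a local descriptor to the index of its nearest
    visual word (squared Euclidean distance). This is the baseline
    'closeness in descriptor space' assignment that the visual-synonyms
    work departs from; the vocabulary here is toy data."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocabulary)), key=lambda i: dist2(descriptor, vocabulary[i]))

vocab = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
print(assign_word((0.9, 1.2), vocab))  # 1
```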

  15. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    Science.gov (United States)

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  16. Diverging receptive and expressive word processing mechanisms in a deep dyslexic reader.

    Science.gov (United States)

    Ablinger, Irene; Radach, Ralph

    2016-01-29

    We report on KJ, a patient with acquired dyslexia due to cerebral artery infarction. He represents an unusually clear case of an "output" deep dyslexic reader, with a distinct pattern of pure semantic reading. According to current neuropsychological models of reading, the severity of this condition is directly related to the degree of impairment in semantic and phonological representations and the resulting imbalance in the interaction between the two word processing pathways. The present work sought to examine whether an innovative eye movement supported intervention combining lexical and segmental therapy would strengthen phonological processing and lead to an attenuation of the extreme semantic over-involvement in KJ's word identification process. Reading performance was assessed before (T1), between (T2), and after (T3) therapy using both analyses of linguistic errors and word viewing patterns. Therapy resulted in improved reading aloud accuracy along with a change in error distribution that suggested a return to more sequential reading. Interestingly, this was in contrast to the dynamics of moment-to-moment word processing, as eye movement analyses still suggested a predominantly holistic strategy, even at T3. So, in addition to documenting the success of the therapeutic intervention, our results call for a theoretically important conclusion: real-time letter and word recognition routines should be considered separately from properties of the verbal output. Combining both perspectives may provide a promising strategy for future assessment and therapy evaluation. Copyright © 2015. Published by Elsevier Ltd.

  17. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing

    OpenAIRE

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I.; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This pa...

  18. Modulation of the inter-hemispheric processing of semantic information during normal aging. A divided visual field experiment.

    Science.gov (United States)

    Hoyau, E; Cousin, E; Jaillard, A; Baciu, M

    2016-12-01

    We evaluated the effect of normal aging on the inter-hemispheric processing of semantic information by using the divided visual field (DVF) method, with words and pictures. Two main theoretical models have been considered: (a) the HAROLD model, which posits that aging is associated with supplementary recruitment of the right hemisphere (RH) and decreased hemispheric specialization, and (b) the RH decline theory, which assumes that the RH becomes less efficient with aging, associated with increased LH specialization. Two groups of subjects were examined, a Young Group (YG) and an Old Group (OG), while participants performed a semantic categorization task (living vs. non-living) on words and pictures. The DVF was realized in two steps: (a) unilateral DVF presentation, with stimuli presented separately in each visual field, left or right, allowing for their initial processing by only one hemisphere, right or left, respectively; (b) bilateral DVF presentation (BVF), with stimuli presented simultaneously in both visual fields, followed by their processing by both hemispheres. These two types of presentation permitted the evaluation of two main characteristics of the inter-hemispheric processing of information, the hemispheric specialization (HS) and the inter-hemispheric cooperation (IHC). Moreover, the BVF allowed us to determine the driver hemisphere for processing information presented in BVF. Results obtained in OG indicated that: (a) semantic categorization was performed as accurately as in YG, even if more slowly; (b) a non-semantic RH decline was observed; and (c) the LH controls semantic processing during the BVF, suggesting an increased role of the LH in aging. However, despite the stronger involvement of the LH in OG, the RH is not completely devoid of semantic abilities. As discussed in the paper, neither the HAROLD model nor the RH decline theory fully explains this pattern of results. 
We rather suggest that the effect of aging on the hemispheric specialization and inter

  19. Visual Working Memory Storage Recruits Sensory Processing Areas

    NARCIS (Netherlands)

    Gayet, Surya; Paffen, Chris L E; Van der Stigchel, Stefan

    Human visual processing is subject to a dynamic influx of visual information. Visual working memory (VWM) allows for maintaining relevant visual information available for subsequent behavior. According to the dominating view, VWM recruits sensory processing areas to maintain this visual information

  20. Visual working memory storage recruits sensory processing areas

    NARCIS (Netherlands)

    Gayet, S.; Paffen, C.L.E.; Stigchel, S. van der

    2018-01-01

    Human visual processing is subject to a dynamic influx of visual information. Visual working memory (VWM) allows for maintaining relevant visual information available for subsequent behavior. According to the dominating view, VWM recruits sensory processing areas to maintain this visual information

  1. [French norms of imagery for pictures, for concrete and abstract words].

    Science.gov (United States)

    Robin, Frédérique

    2006-09-01

    This paper deals with French norms for mental image versus picture agreement for 138 pictures and the imagery value for 138 concrete words and 69 abstract words. The pictures were selected from Snodgrass and Vanderwart's norms (1980). The concrete words correspond to the dominant naming response to the pictorial stimuli. The abstract words were taken from verbal associative norms published by Ferrand (2001). The norms were established according to two variables: 1) mental image vs. picture agreement, and 2) imagery value of words. Three other variables were controlled: 1) picture naming agreement; 2) familiarity of objects referred to in the pictures and the concrete words, and 3) subjective verbal frequency of words. The originality of this work is to provide French imagery norms for the three kinds of stimuli usually compared in research on dual coding. Moreover, this study focuses on variations between figurative and verbal stimuli in visual imagery processes.

  2. False recognition depends on depth of prior word processing: a magnetoencephalographic (MEG) study.

    Science.gov (United States)

    Walla, P; Hufnagl, B; Lindinger, G; Deecke, L; Imhof, H; Lang, W

    2001-04-01

    Brain activity was measured with a whole head magnetoencephalograph (MEG) during the test phases of word recognition experiments. Healthy young subjects had to discriminate between previously presented and new words. During prior study phases two different levels of word processing were provided according to two different kinds of instructions (shallow and deep encoding). Event-related fields (ERFs) associated with falsely recognized words (false alarms) were found to depend on the depth of processing during the prior study phase. False alarms elicited higher brain activity (as reflected by dipole strength) in case of prior deep encoding as compared to shallow encoding between 300 and 500 ms after stimulus onset at temporal brain areas. Between 500 and 700 ms we found evidence for differences in the involvement of neural structures related to both conditions of false alarms. Furthermore, the number of false alarms was found to depend on depth of processing. Shallow encoding led to a higher number of false alarms than deep encoding. All data are discussed as strong support for the ideas that a certain level of word processing is performed by a distinct set of neural systems and that the same neural systems which encode information are reactivated during the retrieval.

  3. A Review of the Effects of Visual-Spatial Representations and Heuristics on Word Problem Solving in Middle School Mathematics

    Science.gov (United States)

    Kribbs, Elizabeth E.; Rogowsky, Beth A.

    2016-01-01

    Mathematics word-problems continue to be an insurmountable challenge for many middle school students. Educators have used pictorial and schematic illustrations within the classroom to help students visualize these problems. However, the data shows that pictorial representations can be more harmful than helpful in that they only display objects or…

  4. When does picture naming take longer than word reading?

    Directory of Open Access Journals (Sweden)

    Andrea Valente

    2016-01-01

    Full Text Available Differences between the cognitive processes involved in word reading and picture naming are well established (e.g. visual or lexico-semantic stages). Still, it is commonly thought that retrieval of phonological forms is shared across tasks. We report a test of this second hypothesis based on the time course of electroencephalographic (EEG) neural activity, reasoning that similar EEG patterns might index similar processing stages. Seventeen participants named objects and read aloud the corresponding words while their behavior and EEG activity were recorded. The latter was analyzed from stimulus onset onwards (stimulus-locked analysis) and from response onset backwards (response-locked analysis), using non-parametric statistics and the spatio-temporal segmentation of ERPs. Behavioral results confirmed that reading entails shorter latencies than naming. The analysis of EEG activity within the stimulus-to-response period allowed distinguishing three phases, broadly successive. Early on, we observed identical distribution of electric field potentials (i.e. topographies), albeit with large amplitude divergences between tasks. Then, we observed sustained cross-task differences in topographies accompanied by extended amplitude differences. Finally, the two tasks again revealed the same topographies, with significant cross-task delays in their onsets and offsets, and still significant amplitude differences. In the response-locked ERPs, the common topography displayed an offset closer to response articulation in word reading compared with picture naming; that is, the transition between the offset of this shared map and the onset of articulation was significantly faster in word reading. The results suggest that the degree of cross-task similarity varies across time. The first phase suggests similar visual processes of variable intensity and time course across tasks, while the second phase suggests marked differences. 
Finally, similarities and differences within the

  5. An electrophysiological investigation of memory encoding, depth of processing, and word frequency in humans.

    Science.gov (United States)

    Guo, Chunyan; Zhu, Ying; Ding, Jinhong; Fan, Silu; Paller, Ken A

    2004-02-12

    Memory encoding can be studied by monitoring brain activity correlated with subsequent remembering. To understand brain potentials associated with encoding, we compared multiple factors known to affect encoding. Depth of processing was manipulated by requiring subjects to detect animal names (deep encoding) or boldface (shallow encoding) in a series of Chinese words. Recognition was more accurate with deep than shallow encoding, and for low- compared to high-frequency words. Potentials were generally more positive for subsequently recognized versus forgotten words; for deep compared to shallow processing; and, for remembered words only, for low- than for high-frequency words. Latency and topographic differences between these potentials suggested that several factors influence the effectiveness of encoding and can be distinguished using these methods, even with Chinese logographic symbols.
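The subsequent-memory logic described above (compare study-phase activity for later-recognized versus later-forgotten words) reduces to a difference of condition means. A minimal sketch with single-value amplitudes (illustrative data; real analyses average full waveforms per electrode and condition):

```python
def subsequent_memory_effect(trials):
    """Difference-due-to-memory (Dm) sketch: average a study-phase ERP
    amplitude (in microvolts) separately for items later recognized vs.
    later forgotten, and return the difference. `trials` is a list of
    (amplitude, remembered) pairs with made-up values."""
    remembered = [amp for amp, rec in trials if rec]
    forgotten = [amp for amp, rec in trials if not rec]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(remembered) - mean(forgotten)

trials = [(2.5, True), (3.1, True), (1.0, False), (0.6, False)]
print(subsequent_memory_effect(trials))  # 2.0
```

A positive value corresponds to the reported pattern of more positive potentials for subsequently recognized words.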

  6. Imprinting modulates processing of visual information in the visual wulst of chicks

    Directory of Open Access Journals (Sweden)

    Uchimura Motoaki

    2006-11-01

    Full Text Available Abstract Background: Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis of visual imprinting, we focused on visual information processing. Results: A lesion in the visual wulst, which is functionally similar to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period, and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color to which the chick was imprinted. Conclusion: These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.

  7. Effects of word width and word length on optimal character size for reading of horizontally scrolling Japanese words

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    2016-02-01

    Full Text Available The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of 4 Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, in which the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded an 88.9% correct rate was calculated for each character size (0.3, 0.6, 1.0, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable to any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than for shorter words, and that a word width of 3.6° was optimal among the word lengths tested (3, 4, and 6 character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can change with word width and word length.
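The threshold procedure described above (raise the scrolling rate until accuracy drops below 88.9%, i.e. roughly 8/9 correct) amounts to picking the highest rate that still meets the criterion. A minimal sketch with made-up accuracy data (the paper's raw results are not reproduced in the record):

```python
def highest_passing_rate(accuracy_by_rate, criterion=8 / 9):
    """Return the highest scrolling rate (arbitrary units) whose
    proportion correct still meets the criterion; None if no rate does.
    `accuracy_by_rate` maps rate -> proportion correct (toy data)."""
    passing = [rate for rate, acc in accuracy_by_rate.items() if acc >= criterion]
    return max(passing) if passing else None

# Illustrative: accuracy falls as the words scroll faster.
rates = {4: 1.0, 6: 0.95, 8: 0.90, 10: 0.75}
print(highest_passing_rate(rates))  # 8
```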

  8. Unimodal and multimodal regions for logographic language processing in left ventral occipitotemporal cortex

    Directory of Open Access Journals (Sweden)

    Yuan Deng

    2013-09-01

    Full Text Available The human neocortex appears to contain a dedicated visual word form area (VWFA) and an adjacent multimodal (visual/auditory) area. However, these conclusions are based on functional magnetic resonance imaging (fMRI) of alphabetic language processing, languages that have clear grapheme-to-phoneme correspondence (GPC) rules that make it difficult to disassociate visual-specific processing from form-to-sound mapping. In contrast, the Chinese language has no clear GPC rules. Therefore, the current study examined whether native Chinese readers also have the same VWFA and multimodal area. Two cross-modal tasks, phonological retrieval of visual words and orthographic retrieval of auditory words, were adopted. Different task requirements were also applied to explore how different levels of cognitive processing modulate activation of putative VWFA-like and multimodal-like regions. Results showed that the left occipitotemporal sulcus responded exclusively to visual inputs and an adjacent region, the left inferior temporal gyrus, showed comparable activation for both visual and auditory inputs. Surprisingly, processing levels did not significantly alter activation of these two regions. These findings indicated that there are both unimodal and multimodal word areas for non-alphabetic language reading, and that activity in these two word-specific regions is independent of task demands at the linguistic level.

  9. Cross-modal integration of lexical-semantic features during word processing: evidence from oscillatory dynamics during EEG.

    Directory of Open Access Journals (Sweden)

    Markus J van Ackeren

    Full Text Available In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4-6 Hz) in the left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.

  10. The meaning of 'life' and other abstract words: Insights from neuropsychology.

    Science.gov (United States)

    Hoffman, Paul

    2016-09-01

    There are a number of long-standing theories on how the cognitive processing of abstract words, like 'life', differs from that of concrete words, like 'knife'. This review considers current perspectives on this debate, focusing particularly on insights obtained from patients with language disorders and integrating these with evidence from functional neuroimaging studies. The evidence supports three distinct and mutually compatible hypotheses. (1) Concrete and abstract words differ in their representational substrates, with concrete words depending particularly on sensory experiences and abstract words on linguistic, emotional, and magnitude-based information. Differential dependence on visual versus verbal experience is supported by the evidence for graded specialization in the anterior temporal lobes for concrete versus abstract words. In addition, concrete words have richer representations, in line with better processing of these words in most aphasic patients and, in particular, patients with semantic dementia. (2) Abstract words place greater demands on executive regulation processes because they have variable meanings that change with context. This theory explains abstract word impairments in patients with semantic-executive deficits and is supported by neuroimaging studies showing greater response to abstract words in inferior prefrontal cortex. (3) The relationships between concrete words are governed primarily by conceptual similarity, while those of abstract words depend on association to a greater degree. This theory, based primarily on interference and priming effects in aphasic patients, is the most recent to emerge and the least well understood. I present analyses indicating that patterns of lexical co-occurrence may be important in understanding these effects. © 2015 The Authors. Journal of Neuropsychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.

  11. Embodiment and second-language: automatic activation of motor responses during processing spatially associated L2 words and emotion L2 words in a vertical Stroop paradigm.

    Science.gov (United States)

    Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2014-05-01

    Converging evidence suggests that understanding our first-language (L1) results in reactivation of experiential sensorimotor traces in the brain. Surprisingly, little is known regarding the involvement of these processes during second-language (L2) processing. Participants saw L1 or L2 words referring to entities with a typical location (e.g., star, mole) (Experiment 1 & 2) or to an emotion (e.g., happy, sad) (Experiment 3). Participants responded to the words' ink color with an upward or downward arm movement. Despite word meaning being fully task-irrelevant, L2 automatically activated motor responses similar to L1 even when L2 was acquired rather late in life (age >11). Specifically, words such as star facilitated upward, and words such as root facilitated downward responses. Additionally, words referring to positive emotions facilitated upward, and words referring to negative emotions facilitated downward responses. In summary our study suggests that reactivation of experiential traces is not limited to L1 processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Consonant and Vowel Processing in Word Form Segmentation: An Infant ERP Study

    Directory of Open Access Journals (Sweden)

    Katie Von Holzen

    2018-01-01

    Full Text Available Segmentation skill and the preferential processing of consonants (C-bias) develop during the second half of the first year of life, and it has been proposed that these facilitate language acquisition. We used event-related brain potentials (ERPs) to investigate the neural bases of early word form segmentation, and of the early processing of onset consonants, medial vowels, and coda consonants, exploring how differences in these early skills might be related to later language outcomes. Our results with French-learning eight-month-old infants primarily support previous studies that found that the word familiarity effect in segmentation is developing from a positive to a negative polarity at this age. Although as a group infants exhibited an anterior-localized negative effect, inspection of individual results revealed that a majority of infants showed a negative-going response (Negative Responders), while a minority showed a positive-going response (Positive Responders). Furthermore, all infants demonstrated sensitivity to onset consonant mispronunciations, while Negative Responders demonstrated a lack of sensitivity to vowel mispronunciations, a developmental pattern similar to previous literature. Responses to coda consonant mispronunciations revealed neither sensitivity nor lack of sensitivity. We found that infants showing a more mature, negative response to newly segmented words compared to control words (evaluating segmentation skill) and mispronunciations (evaluating phonological processing) at test also had greater growth in word production over the second year of life than infants showing a more positive response. These results establish a relationship between early segmentation skills and phonological processing (not modulated by the type of mispronunciation) and later lexical skills.

  13. Modality dependency of familiarity ratings of Japanese words.

    Science.gov (United States)

    Amano, S; Kondo, T; Kakehi, K

    1995-07-01

    Familiarity ratings for a large number of aurally and visually presented Japanese words were measured for 11 subjects, in order to investigate the modality dependency of familiarity. The correlation coefficient between auditory and visual ratings was .808, which is lower than that observed for English words, suggesting that a substantial portion of the mental lexicon is modality dependent. It was shown that the modality dependency is greater for low-familiarity words than it is for medium- or high-familiarity words. This difference between the low- and the medium- or high-familiarity words has a relationship to orthography. That is, the dependency is larger in words consisting only of kanji, which may have multiple pronunciations and usually represent meaning, than it is in words consisting only of hiragana or katakana, which have a single pronunciation and usually do not represent meaning. These results indicate that the idiosyncratic characteristics of Japanese orthography contribute to the modality dependency.

  14. WORD LEVEL DISCRIMINATIVE TRAINING FOR HANDWRITTEN WORD RECOGNITION

    NARCIS (Netherlands)

    Chen, W.; Gader, P.

    2004-01-01

    Word level training refers to the process of learning the parameters of a word recognition system based on word level criteria functions. Previously, researchers trained lexicon-driven handwritten word recognition systems at the character level individually. These systems generally use statistical

  15. The Influence of Sex Information on Gender Word Processing.

    Science.gov (United States)

    Casado, Alba; Palma, Alfonso; Paolieri, Daniela

    2018-06-01

    Three different tasks (word repetition, lexical decision, and gender decision) were designed to explore the impact of the sex clues (sex of the speaker, sex of the addressee) and the type of gender (semantic, arbitrary) on the processing of isolated Spanish gendered words. The findings showed that the grammatical gender feature was accessed when no mandatory attentional focus was required. In addition, the results indicated that the participants organize information according to their own sex role, which provides more salience to the words that match in grammatical gender with their own sex role representation, even when the gender assignment is arbitrary. Finally, the sex of the speaker biased the lexical access and the grammatical gender selection, serving as a semantic prime when the two dimensions have a congruent relationship. Furthermore, the masculine form serves as the generic gender representing both male and female figures.

  16. Elevating Baseline Activation Does Not Facilitate Reading of Unattended Words

    Science.gov (United States)

    Lien, Mei-Ching; Kouchi, Scott; Ruthruff, Eric; Lachter, Joel B.

    2009-01-01

    Previous studies have disagreed about the extent to which people extract meaning from words presented outside the focus of spatial attention. The present study examined a possible explanation for such discrepancies, inspired by attenuation theory: unattended words can be read more automatically when they have a high baseline level of activation (e.g., due to frequent repetition or to being expected in a given context). We presented a brief prime word in lowercase, followed by a target word in uppercase. Participants indicated whether the target word belonged to a particular category (e.g., "sport"). When we drew attention to the prime word using a visual cue, the prime produced substantial priming effects on target responses (i.e., faster responses when the prime and target words were identical or from the same category than when they belonged to different categories). When prime words were not attended, however, they produced no priming effects. This finding replicated even when there were only 4 words, each repeated 160 times during the experiment. Even with a very high baseline level of activation, it appears that very little word processing is possible without spatial attention.

  17. Processing emotional words in two languages with one brain: ERP and fMRI evidence from Chinese-English bilinguals.

    Science.gov (United States)

    Chen, Peiyao; Lin, Jie; Chen, Bingle; Lu, Chunming; Guo, Taomei

    2015-10-01

    Emotional words in a bilingual's second language (L2) seem to have less emotional impact compared to emotional words in the first language (L1). The present study examined the neural mechanisms of emotional word processing in Chinese-English bilinguals' two languages by using both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Behavioral results show a robust positive word processing advantage in L1 such that responses to positive words were faster and more accurate compared to responses to neutral words and negative words. In L2, emotional words only received higher accuracies than neutral words. In ERPs, positive words elicited a larger early posterior negativity and a smaller late positive component than neutral words in L1, while a trend of reduced N400 component was found for positive words compared to neutral words in L2. In fMRI, reduced activation was found for L1 emotional words in both the left middle occipital gyrus and the left cerebellum whereas increased activation in the left cerebellum was found for L2 emotional words. Altogether, these results suggest that emotional word processing advantage in L1 relies on rapid and automatic attention capture while facilitated semantic retrieval might help processing emotional words in L2. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Brain processing of task-relevant and task-irrelevant emotional words: an ERP study.

    Science.gov (United States)

    González-Villar, Alberto J; Triñanes, Yolanda; Zurrón, Montserrat; Carrillo-de-la-Peña, María T

    2014-09-01

    Although there is evidence for preferential perceptual processing of written emotional information, the effects of attentional manipulations and the time course of affective processing require further clarification. In this study, we attempted to investigate how the emotional content of words modulates cerebral functioning (event-related potentials, ERPs) and behavior (reaction times, RTs) when the content is task-irrelevant (emotional Stroop Task, EST) or task-relevant (emotional categorization task, ECT), in a sample of healthy middle-aged women. In the EST, the RTs were longer for emotional words than for neutral words, and in the ECT, they were longer for neutral and negative words than for positive words. A principal components analysis of the ERPs identified various temporospatial factors that were differentially modified by emotional content. P2 was the first emotion-sensitive component, with enhanced factor scores for negative nouns across tasks. The N2 and late positive complex had enhanced factor scores for emotional relative to neutral information only in the ECT. The results reinforce the idea that written emotional information has a preferential processing route, both when it is task-irrelevant (producing behavioral interference) and when it is task-relevant (facilitating the categorization). After early automatic processing of the emotional content, late ERPs become more emotionally modulated as the level of attention to the valence increases.

  19. Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2017-12-01

    Full Text Available An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions, with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as “not,” “and,” and “or” simultaneously. These words do not refer directly to the real world but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may often be used. The current study builds a recurrent neural network model with long short-term memory units and trains it to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word “and,” which required a robot to lift up both its hands, worked as if it were a universal quantifier. The word “or,” which required action generation that looked apparently random, was represented as an
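
    The compositional role this record ascribes to logic words can be made concrete with a toy example. The study itself trains an LSTM to discover such structure from data; the small symbolic evaluator below (all words, predicates, the world state, and the flat left-to-right grammar are hypothetical) only illustrates the target behavior the network must learn: referential words ground out as predicates over the robot's current percept, while "not", "and", and "or" act as operators that combine them.

```python
# Referential words are grounded as predicates over a world state;
# logic words ("not", "and", "or") act as operators that combine them.
# Everything here is a hypothetical illustration, not the paper's model.

WORLD = {"color": "red", "size": "big"}  # the robot's current percept

PREDICATES = {
    "red":   lambda w: w["color"] == "red",
    "blue":  lambda w: w["color"] == "blue",
    "big":   lambda w: w["size"] == "big",
    "small": lambda w: w["size"] == "small",
}

def evaluate(sentence, world):
    """Left-to-right fold over tokens: 'not' negates the next predicate,
    'and'/'or' combine the running value with the next clause (no operator
    precedence; a deliberately minimal grammar)."""
    value, op, negate = None, None, False
    for tok in sentence.split():
        if tok == "not":
            negate = True
        elif tok in ("and", "or"):
            op = tok
        else:
            v = PREDICATES[tok](world)
            if negate:
                v, negate = not v, False
            if value is None:
                value = v
            elif op == "and":
                value = value and v
            else:
                value = value or v
    return value

print(evaluate("red and big", WORLD))    # True
print(evaluate("not blue", WORLD))       # True
print(evaluate("blue or small", WORLD))  # False
```

    A trained RNN has to realize the same input-output mapping implicitly in its memory-cell dynamics rather than through explicit symbolic rules.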

  20. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words.

    Science.gov (United States)

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, in which the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable to any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than for shorter words, and a word width of 3.6° was optimal among the word lengths tested (three-, four-, and six-character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can change with word width and word length in scrolling Japanese words.
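
    The thresholding procedure this record describes can be sketched in a few lines: the scrolling rate is stepped up until accuracy falls below the 8/9 (88.9%) criterion, and the last rate still above criterion is taken as the reading-speed measure. The logistic psychometric function and its parameters below are hypothetical stand-ins for a real participant, not values from the study.

```python
# Ascending-rate threshold procedure: step up the scrolling rate until a
# simulated reader's accuracy drops below the 88.9% criterion.
import math

def p_correct(rate, midpoint=12.0, slope=1.5):
    """Simulated probability of reading a word correctly at a given
    scrolling rate (characters/second); a hypothetical logistic observer."""
    return 1.0 / (1.0 + math.exp((rate - midpoint) / slope))

def threshold_rate(criterion=8/9, start=1.0, step=0.5):
    """Highest rate at which simulated accuracy still meets the criterion."""
    rate = start
    while p_correct(rate + step) >= criterion:
        rate += step
    return rate

print(threshold_rate())  # → 8.5 for this hypothetical observer
```

    A real session replaces `p_correct` with trial-by-trial responses, but the stopping rule is the same.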

  1. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Science.gov (United States)

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
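
    The inference process described in this record can be sketched directly: each word is a point in a d-dimensional feature space, each modality contributes a Gaussian likelihood around its noisy observation, and recognition picks the word with the highest combined log-likelihood. The vocabulary size, dimensionality, and noise levels below are illustrative choices, not the paper's; the sketch shows the cue-combination machinery rather than reproducing the reported enhancement curves.

```python
# Bayesian cue integration over a word "feature space": recognition is
# argmax over words of the summed Gaussian log-likelihoods of the
# auditory and (optionally) visual observations. Parameters are illustrative.
import random

random.seed(0)

def make_words(n_words, d):
    """Random word prototypes as points in d-dimensional feature space."""
    return [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_words)]

def log_lik(obs, word, sigma):
    return -sum((o - w) ** 2 for o, w in zip(obs, word)) / (2 * sigma ** 2)

def accuracy(words, sigma_a, sigma_v=None, trials=1000):
    correct = 0
    for _ in range(trials):
        k = random.randrange(len(words))
        obs_a = [w + random.gauss(0, sigma_a) for w in words[k]]
        obs_v = ([w + random.gauss(0, sigma_v) for w in words[k]]
                 if sigma_v is not None else None)
        scores = [log_lik(obs_a, cand, sigma_a)
                  + (log_lik(obs_v, cand, sigma_v) if obs_v else 0)
                  for cand in words]
        correct += scores.index(max(scores)) == k
    return correct / trials

words = make_words(20, 8)
for sigma_a in (0.5, 2.0, 8.0):  # low, moderate, high auditory noise
    gain = accuracy(words, sigma_a, sigma_v=2.0) - accuracy(words, sigma_a)
    print(f"auditory noise {sigma_a}: visual gain {gain:+.2f}")
```

    In the paper's analysis, how this visual gain varies with auditory noise depends on the dimensionality of the feature space, which is what reconciles the conflicting findings on inverse effectiveness.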

  3. The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands

    OpenAIRE

    Diana, Rachel A.; Reder, Lynne M.

    2006-01-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in ad...

  4. The Process of Word Formation and Phrase Structure of Android Application Names

    OpenAIRE

    Handayani, Heny

    2013-01-01

    Android is an operating system for mobile devices, such as smartphones and tablet computers, that was developed by Google. Android is now a popular operating system, sought out by people because of their need for information. The word formation processes and phrase structures of Android application names are interesting to analyze since their structure differs from that of words in general. The purpose of this research is to describe and explain which word formation processes and phrase structures are commonly ...

  5. Mandarin-English Bilinguals Process Lexical Tones in Newly Learned Words in Accordance with the Language Context.

    Science.gov (United States)

    Quam, Carolyn; Creel, Sarah C

    2017-01-01

    Previous research has mainly considered the impact of tone-language experience on the ability to discriminate linguistic pitch, but proficient bilingual listening requires differential processing of sound variation in each language context. Here, we ask whether Mandarin-English bilinguals, for whom pitch indicates word distinctions in one language but not the other, can process pitch differently in a Mandarin context vs. an English context. Across three eye-tracked word-learning experiments, results indicated that tone-intonation bilinguals process tone in accordance with the language context. In Experiment 1, 51 Mandarin-English bilinguals and 26 English speakers without tone experience were taught Mandarin-compatible novel words with tones. Mandarin-English bilinguals out-performed English speakers, and, for bilinguals, overall accuracy was correlated with Mandarin dominance. Experiment 2 taught 24 Mandarin-English bilinguals and 25 English speakers novel words with Mandarin-like tones but English-like phonemes and phonotactics. The Mandarin-dominance advantages observed in Experiment 1 disappeared when words were English-like. Experiment 3 contrasted Mandarin-like vs. English-like words in a within-subjects design, providing even stronger evidence that bilinguals can process tone language-specifically. Bilinguals (N = 58), regardless of language dominance, attended more to tone than English speakers without Mandarin experience (N = 28), but only when words were Mandarin-like, not when they were English-like. Mandarin-English bilinguals thus tailor tone processing to the within-word language context.

  6. Levels of word processing and incidental memory: dissociable mechanisms in the temporal lobe.

    Science.gov (United States)

    Castillo, E M; Simos, P G; Davis, R N; Breier, J; Fitzgerald, M E; Papanicolaou, A C

    2001-11-16

    Word recall is facilitated when deep (e.g. semantic) processing is applied during encoding. This fact raises the question of the existence of specific brain mechanisms supporting different levels of information processing that can modulate incidental memory performance. In this study we obtained spatiotemporal brain activation profiles, using magnetic source imaging, from 10 adult volunteers as they performed a shallow (phonological) processing task and a deep (semantic) processing task. When phonological analysis of the word stimuli into their constituent phonemes was required, activation was largely restricted to the posterior portion of the left superior temporal gyrus (area 22). Conversely, when access to lexical/semantic representations was required, activation was found predominantly in the left middle temporal gyrus and medial temporal cortex. The differential engagement of each mechanism during word encoding was associated with dramatic changes in subsequent incidental memory performance.

  7. Letter position coding across modalities: braille and sighted reading of sentences with jumbled words.

    Science.gov (United States)

    Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo

    2015-04-01

    This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.

  8. Understanding geological processes: Visualization of rigid and non-rigid transformations

    Science.gov (United States)

    Shipley, T. F.; Atit, K.; Manduca, C. A.; Ormand, C. J.; Resnick, I.; Tikoff, B.

    2012-12-01

    Visualizations are used in the geological sciences to support reasoning about structures and events. Research in cognitive sciences offers insights into the range of skills of different users, and ultimately how visualizations might support different users. To understand the range of skills needed to reason about earth processes we have developed a program of research that is grounded in the geosciences' careful description of the spatial and spatiotemporal patterns associated with earth processes. In particular, we are pursuing a research program that identifies specific spatial skills and investigates whether and how they are related to each other. For this study, we focus on a specific question: Is there an important distinction in the geosciences between rigid and non-rigid deformation? To study a general spatial thinking skill we employed displays with non-geological objects that had been altered by rigid change (rotation), and two types of non-rigid change ("brittle" (or discontinuous) and "ductile" (or continuous) deformation). Disciplinary scientists (geosciences and chemistry faculty), and novices (non-science faculty and undergraduate psychology students) answered questions that required them to visualize the appearance of the object before the change. In one study, geologists and chemists were found to be superior to non-science faculty in reasoning about rigid rotations (e.g., what an object would look like from a different perspective). Geologists were superior to chemists in reasoning about brittle deformations (e.g., what an object looked like before it was broken - here the object was a word cut into many fragments displaced in different directions). This finding is consistent with two hypotheses: 1) Experts are good at visualizing the types of changes required for their domain; and 2) Visualization of rigid and non-rigid changes are not the same skill. An additional important finding is that there was a broad range of skill in both rigid and non

  9. Evolutionary relevance facilitates visual information processing.

    Science.gov (United States)

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  10. The Influence of Emotional Words on Sentence Processing: Electrophysiological and Behavioral Evidence

    Science.gov (United States)

    Martin-Loeches, Manuel; Fernandez, Anabel; Schacht, Annekathrin; Sommer, Werner; Casado, Pilar; Jimenez-Ortega, Laura; Fondevila, Sabela

    2012-01-01

    Whereas most previous studies on emotion in language have focussed on single words, we investigated the influence of the emotional valence of a word on the syntactic and semantic processes unfolding during sentence comprehension, by means of event-related brain potentials (ERP). Experiment 1 assessed how positive, negative, and neutral adjectives…

  11. Hemispheric Asymmetries in Semantic Processing: Evidence from False Memories for Ambiguous Words

    Science.gov (United States)

    Faust, Miriam; Ben-Artzi, Elisheva; Harel, Itay

    2008-01-01

    Previous research suggests that the left hemisphere (LH) focuses on strongly related word meanings; the right hemisphere (RH) may contribute uniquely to the processing of lexical ambiguity by activating and maintaining a wide range of meanings, including subordinate meanings. The present study used the word-lists false memory paradigm [Roediger,…

  12. The Neural Mechanisms of Word Order Processing Revisited: Electrophysiological Evidence from Japanese

    Science.gov (United States)

    Wolff, Susann; Schlesewsky, Matthias; Hirotani, Masako; Bornkessel-Schlesewsky, Ina

    2008-01-01

    We present two ERP studies on the processing of word order variations in Japanese, a language that is suited to shedding further light on the implications of word order freedom for neurocognitive approaches to sentence comprehension. Experiment 1 used auditory presentation and revealed that initial accusative objects elicit increased processing…

  13. Processing and Representation of Ambiguous Words in Chinese Reading: Evidence from Eye Movements.

    Science.gov (United States)

    Shen, Wei; Li, Xingshan

    2016-01-01

    In the current study, we used eye tracking to investigate whether senses of polysemous words and meanings of homonymous words are represented and processed similarly or differently in Chinese reading. Readers read sentences containing target words that were either homonymous or polysemous. The context preceding each target word was manipulated to bias participants toward reading the ambiguous word according to its dominant, subordinate, or neutral meaning. Similarly, disambiguating regions following the target words were manipulated to favor either the dominant or subordinate meanings of the ambiguous words. The results showed similar eye movement patterns when Chinese participants read sentences containing homonymous and polysemous words. The study also found that participants took longer to read the target word and the disambiguating text following it when the prior context and disambiguating regions favored divergent meanings rather than the same meaning. These results suggest that homonymy and polysemy are represented similarly in the mental lexicon when a particular meaning (sense) is fully specified by disambiguating information. Furthermore, multiple meanings (senses) are represented as separate entries in the mental lexicon.

  14. Preserved visual lexicosemantics in global aphasia: a right-hemisphere contribution?

    Science.gov (United States)

    Gold, B T; Kertesz, A

    2000-12-01

    Extensive testing of a patient, GP, who encountered large-scale destruction of left-hemisphere (LH) language regions was undertaken in order to address several issues concerning the ability of nonperisylvian areas to extract meaning from printed words. Testing revealed recognition of superordinate boundaries of animals, tools, vegetables, fruit, clothes, and furniture. GP was able to distinguish proper names from other nouns and from nonwords. GP was also able to differentiate words representing living things from those denoting nonliving things. The extent of LH infarct resulting in a global impairment to phonological and syntactic processing suggests LH specificity for these functions but considerable right-hemisphere (RH) participation in visual lexicosemantic processing. The relative preservation of visual lexicosemantic abilities despite severe impairment to all aspects of phonological coding demonstrates the importance of the direct route to the meaning of single printed words.

  15. Do preschool children learn to read words from environmental prints?

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    Full Text Available Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters correspond regularly to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude this phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions in which the contextual cues (i.e., everything apart from the presentation of the words themselves in logo format, such as the color, logo, and font type cues) were gradually minimized. Children aged 3 to 5 were tested. We observed that children of all ages performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of the various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrate that young children do not easily learn words by extracting their visual form information, even from familiar environmental prints. However, children aged 5 begin to pay more attention to the visual form information of words in highly familiar logos than do those aged 3 and 4.

  16. Cognitive load effects on early visual perceptual processing.

    Science.gov (United States)

    Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia

    2018-05-01

    Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.

  17. Visualizing the process of process modeling with PPMCharts

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.T.P.; Pinggera, J.; Reijers, H.A.; Weber, B.; Poels, G.; La Rosa, M.; Soffer, P.

    2013-01-01

    In the quest for knowledge about how to make good process models, recent research focus is shifting from studying the quality of process models to studying the process of process modeling (often abbreviated as PPM) itself. This paper reports on our efforts to visualize this specific process in such

  18. What you say matters: exploring visual-verbal interactions in visual working memory.

    Science.gov (United States)

    Mate, Judit; Allen, Richard J; Baqués, Josep

    2012-01-01

    The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

  19. The Use of Hospital Information Systems Data Base with Word Processing and Other Medical Records System Applications

    OpenAIRE

    Rusnak, James E.

    1982-01-01

    The approach frequently used to introduce computer technology into a hospital Medical Records Department is to implement a Word Processing System. Word processing is a form of computer system application that is intended to improve the department's productivity by improving the medical information transcription process. The effectiveness of the Word Processing System may be further enhanced by installing system facilities to provide access to data processing file information in the Hospital's...

  20. Spectrotemporal processing drives fast access to memory traces for spoken words.

    Science.gov (United States)

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
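    The MMN itself is conventionally quantified as the deviant-minus-standard difference wave averaged over a latency window. A minimal sketch of that computation (toy data; the 1 kHz sampling rate and the window are assumptions, not the study's parameters):

```python
def mmn_amplitude(standard_erp, deviant_erp, times, window=(0.1, 0.25)):
    """Mean amplitude (e.g., in microvolts) of the deviant-minus-standard
    difference wave within a latency window given in seconds."""
    diffs = [d - s for s, d, t in zip(standard_erp, deviant_erp, times)
             if window[0] <= t <= window[1]]
    return sum(diffs) / len(diffs)

# Toy data at 1 kHz: the deviant is 2 uV more negative inside the window.
times = [i / 1000 for i in range(500)]
standard = [0.0] * len(times)
deviant = [-2.0 if 0.1 <= t <= 0.25 else 0.0 for t in times]
print(mmn_amplitude(standard, deviant, times))  # -2.0
```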

  1. Processing of emotion words by patients with autism spectrum disorders: evidence from reaction times and EEG.

    Science.gov (United States)

    Lartseva, Alina; Dijkstra, Ton; Kan, Cornelis C; Buitelaar, Jan K

    2014-11-01

    This study investigated processing of emotion words in autism spectrum disorders (ASD) using reaction times and event-related potentials (ERP). Adults with (n = 21) and without (n = 20) ASD performed a lexical decision task on emotion and neutral words while their brain activity was recorded. Both groups showed faster responses to emotion words compared to neutral, suggesting intact early processing of emotion in ASD. In the ERPs, the control group showed a typical late positive component (LPC) at 400-600 ms for emotion words compared to neutral, while the ASD group showed no LPC. The between-group difference in LPC amplitude was significant, suggesting that emotion words were processed differently by individuals with ASD, although their behavioral performance was similar to that of typical individuals.

  2. Mandarin-English Bilinguals Process Lexical Tones in Newly Learned Words in Accordance with the Language Context

    Science.gov (United States)

    Quam, Carolyn; Creel, Sarah C.

    2017-01-01

    Previous research has mainly considered the impact of tone-language experience on ability to discriminate linguistic pitch, but proficient bilingual listening requires differential processing of sound variation in each language context. Here, we ask whether Mandarin-English bilinguals, for whom pitch indicates word distinctions in one language but not the other, can process pitch differently in a Mandarin context vs. an English context. Across three eye-tracked word-learning experiments, results indicated that tone-intonation bilinguals process tone in accordance with the language context. In Experiment 1, 51 Mandarin-English bilinguals and 26 English speakers without tone experience were taught Mandarin-compatible novel words with tones. Mandarin-English bilinguals out-performed English speakers, and, for bilinguals, overall accuracy was correlated with Mandarin dominance. Experiment 2 taught 24 Mandarin-English bilinguals and 25 English speakers novel words with Mandarin-like tones, but English-like phonemes and phonotactics. The Mandarin-dominance advantages observed in Experiment 1 disappeared when words were English-like. Experiment 3 contrasted Mandarin-like vs. English-like words in a within-subjects design, providing even stronger evidence that bilinguals can process tone language-specifically. Bilinguals (N = 58), regardless of language dominance, attended more to tone than English speakers without Mandarin experience (N = 28), but only when words were Mandarin-like—not when they were English-like. Mandarin-English bilinguals thus tailor tone processing to the within-word language context. PMID:28076400

  3. Evolutionary Relevance Facilitates Visual Information Processing

    Directory of Open Access Journals (Sweden)

    Russell E. Jackson

    2013-07-01

    Full Text Available Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that carry significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the greatest impact of perceptual load. Evolutionary relevance may thus importantly affect everyday visual information processing.

  4. Is the motor system necessary for processing action and abstract emotion words? Evidence from focal brain lesions

    Directory of Open Access Journals (Sweden)

    Felix R. Dreyer

    2015-11-01

    Full Text Available Neuroimaging and neuropsychological experiments suggest that modality-preferential cortices, including motor and somatosensory areas, contribute to the semantic processing of action-related concrete words. In contrast, a possible role of modality-preferential – including sensorimotor – areas in processing abstract meaning remains under debate. However, recent fMRI studies indicate an involvement of the left sensorimotor cortex in the processing of abstract-emotional words (e.g., love). But are these areas indeed necessary for processing action-related and abstract words? The current study investigates word processing in two patients suffering from focal brain lesions in the left frontocentral motor system. A speeded lexical decision task (LDT) on meticulously matched word groups showed that the recognition of nouns from different semantic categories – related to food, animals, tools, and abstract-emotional concepts – was differentially affected. Whereas patient HS, with a lesion in dorsolateral central sensorimotor cortex next to the hand area, showed a category-specific deficit in recognizing tool words, patient CA, suffering from a lesion centered in the left SMA, was primarily impaired in abstract-emotional word processing. These results point to a causal role of the motor cortex in the semantic processing of both action-related object concepts and abstract-emotional concepts, and therefore suggest that the motor areas previously found active in action-related and abstract word processing can serve a meaning-specific, necessary role in word recognition. The category-specific nature of the observed dissociations is difficult to reconcile with the idea that sensorimotor systems are somehow peripheral or ‘epiphenomenal’ to meaning and concept processing. Rather, our results are consistent with the claim that cognition is grounded in action and perception and based on distributed action-perception circuits reaching into sensorimotor areas.

  5. The Impact of Metacognitive Strategies and Self-Regulating Processes of Solving Math Word Problems

    Science.gov (United States)

    Vula, Eda; Avdyli, Rrezarta; Berisha, Valbona; Saqipi, Blerim; Elezi, Shpetim

    2017-01-01

    This empirical study investigates the impact of metacognitive strategies and self-regulating processes in learners' achievement on solving math word problems. It specifically analyzes the impact of the linguistic factor and the number of steps and arithmetical operations that learners need to apply during the process of solving math word problems.…

  6. Levels-of-processing effect on frontotemporal function in schizophrenia during word encoding and recognition.

    Science.gov (United States)

    Ragland, J Daniel; Gur, Ruben C; Valdez, Jeffrey N; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J; Moelter, Stephen T; Gur, Raquel E

    2005-10-01

    Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies.

  7. The Limited Impact of Exposure Duration on Holistic Word Processing.

    Science.gov (United States)

    Chen, Changming; Abbasi, Najam Ul Hasan; Song, Shuang; Chen, Jie; Li, Hong

    2016-01-01

    The current study explored the impact of stimulus exposure duration on holistic word processing as measured by the complete composite paradigm (CPc paradigm). Participants were asked to match the cued target parts of two characters presented for either a long (600 ms) or a short (170 ms) duration. They were tested with two popular versions of the CPc paradigm: the "early-fixed" task, where the attention cue was visible from the beginning of each trial at a fixed position, and the "delayed-random" task, where the cue appeared after the study character at random locations. The holistic word effect, as indexed by the alignment × congruency interaction, was identified in both tasks and was unaffected by stimulus duration in either task. Moreover, the "delayed-random" task did not produce a larger holistic word effect than the "early-fixed" task. These results suggest that exposure duration (from around 150 to 600 ms) has a limited impact on the holistic word effect, and they have methodological implications for experiment designs in this field.
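    The holistic index mentioned here, the alignment × congruency interaction, reduces to a difference of differences over the four cells of the design. A minimal sketch (the accuracy values are hypothetical):

```python
def holistic_index(aligned_congruent, aligned_incongruent,
                   misaligned_congruent, misaligned_incongruent):
    """Alignment x congruency interaction: the congruency effect under
    aligned presentation minus the congruency effect under misaligned
    presentation. Larger values indicate more holistic processing."""
    congruency_aligned = aligned_congruent - aligned_incongruent
    congruency_misaligned = misaligned_congruent - misaligned_incongruent
    return congruency_aligned - congruency_misaligned

# Hypothetical accuracies: congruency matters mostly when parts are aligned.
print(round(holistic_index(0.90, 0.70, 0.85, 0.83), 2))  # 0.18
```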

  8. Unfolding Visual Lexical Decision in Time

    Science.gov (United States)

    Barca, Laura; Pezzulo, Giovanni

    2012-01-01

    Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called “lexicality effect" (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as “lexical" or “non-lexical:" high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419
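    Trajectory attraction of the kind measured here is commonly indexed by the maximum perpendicular deviation of the recorded x,y path from the straight line joining its start and end points. A minimal sketch of that computation (a standard mouse-tracking measure, not necessarily the authors' exact pipeline):

```python
def max_deviation(xs, ys):
    """Maximum perpendicular distance of a mouse trajectory from the
    straight line joining its first and last samples -- a common
    mouse-tracking index of attraction toward the competitor item."""
    x0, y0, x1, y1 = xs[0], ys[0], xs[-1], ys[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    # Point-to-line distance via the 2-D cross product |(p - p0) x (p1 - p0)| / |p1 - p0|.
    return max(abs((x - x0) * dy - (y - y0) * dx) / norm
               for x, y in zip(xs, ys))

print(max_deviation([0, 1, 2], [0, 0, 0]))  # 0.0 (straight movement)
print(max_deviation([0, 1, 2], [0, 1, 0]))  # 1.0 (bowed trajectory)
```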

  9. Word Spelling Assessment Using ICT: The Effect of Presentation Modality

    Science.gov (United States)

    Sarris, Menelaos; Panagiotakopoulos, Chris

    2010-01-01

    To date, the spelling process has been assessed using typical spelling-to-dictation tasks, where children's performance is evaluated mainly in terms of spelling-error scores. In the present work a simple graphical computer interface is reported, aiming to investigate the effects of input modality (e.g., visual vs. verbal) on word spelling. The software…

  10. Dynamic versus Static Dictionary with and without Printed Focal Words in e-Book Reading as Facilitator for Word Learning

    Science.gov (United States)

    Korat, Ofra; Levin, Iris; Ben-Shabt, Anat; Shneor, Dafna; Bokovza, Limor

    2014-01-01

    We investigated the extent to which a dictionary embedded in an e-book, with static or dynamic visuals and with or without printed focal words, affects word learning. A pretest-posttest design was used to measure gains in expressive word meaning and spelling. The participants included 250 Hebrew-speaking second graders from…

  11. Context predicts word order processing in Broca's region.

    Science.gov (United States)

    Kristensen, Line Burholt; Engberg-Pedersen, Elisabeth; Wallentin, Mikkel

    2014-12-01

    The function of the left inferior frontal gyrus (L-IFG) is highly disputed. A number of language processing studies have linked the region to the processing of syntactical structure. Still, there is little agreement when it comes to defining why linguistic structures differ in their effects on the L-IFG. In a number of languages, the processing of object-initial sentences affects the L-IFG more than the processing of subject-initial ones, but frequency and distribution differences may act as confounding variables. Syntactically complex structures (like the object-initial construction in Danish) are often less frequent and only viable in certain contexts. With this confound in mind, the L-IFG activation may be sensitive to other variables than a syntax manipulation on its own. The present fMRI study investigates the effect of a pragmatically appropriate context on the processing of subject-initial and object-initial clauses with the IFG as our ROI. We find that Danish object-initial clauses yield a higher BOLD response in L-IFG, but we also find an interaction between appropriateness of context and word order. This interaction overlaps with traditional syntax areas in the IFG. For object-initial clauses, the effect of an appropriate context is bigger than for subject-initial clauses. This result is supported by an acceptability study that shows that, given appropriate contexts, object-initial clauses are considered more appropriate than subject-initial clauses. The increased L-IFG activation for processing object-initial clauses without a supportive context may be interpreted as reflecting either reinterpretation or the recipients' failure to correctly predict word order from contextual cues.

  12. Facilitation and inhibition of visual display search processes through use of colour

    NARCIS (Netherlands)

    Nes, van F.L.; Juola, J.F.; Moonen, R.J.A.M.

    1987-01-01

    The effect of colour differences on visual search of videotex displays has been investigated in several experiments, including one with accurate measurements of eye movements. Subjects had to search for specific target words on display pages with normal text in one, two or four colours. The

  13. [Effect of concreteness of target words on verbal working memory: an evaluation using Japanese version of reading span test].

    Science.gov (United States)

    Kondo, H; Osaka, N

    2000-04-01

    Effects of the concreteness and representation mode (kanji/hiragana) of target words on working memory during reading were tested using the Japanese version of the reading span test (RST), developed by Osaka and Osaka (1994). The concreteness and familiarity of the target words and the difficulty of the sentences were carefully controlled. Words with high concreteness resulted in significantly higher RST scores, suggesting that working memory processes these words efficiently. The results suggest that highly concrete nouns associated with visual cues consume less working memory capacity during reading. The effect of representation mode differed between subjects with high and low RST scores. Characteristics of highly concrete words that may be responsible for this processing efficiency are discussed.

  14. Independence of early speech processing from word meaning.

    Science.gov (United States)

    Travis, Katherine E; Leonard, Matthew K; Chan, Alexander M; Torres, Christina; Sizemore, Marisa L; Qu, Zhe; Eskandar, Emad; Dale, Anders M; Elman, Jeffrey L; Cash, Sydney S; Halgren, Eric

    2013-10-01

    We combined magnetoencephalography (MEG) with magnetic resonance imaging and electrocorticography to separate in anatomy and latency 2 fundamental stages underlying speech comprehension. The first acoustic-phonetic stage is selective for words relative to control stimuli individually matched on acoustic properties. It begins ∼60 ms after stimulus onset and is localized to middle superior temporal cortex. It was replicated in another experiment, but is strongly dissociated from the response to tones in the same subjects. Within the same task, semantic priming of the same words by a related picture modulates cortical processing in a broader network, but this does not begin until ∼217 ms. The earlier onset of acoustic-phonetic processing compared with lexico-semantic modulation was significant in each individual subject. The MEG source estimates were confirmed with intracranial local field potential and high gamma power responses acquired in 2 additional subjects performing the same task. These recordings further identified sites within superior temporal cortex that responded only to the acoustic-phonetic contrast at short latencies, or the lexico-semantic at long. The independence of the early acoustic-phonetic response from semantic context suggests a limited role for lexical feedback in early speech perception.

  15. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    Science.gov (United States)

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  16. Co-lateralized bilingual mechanisms for reading in single and dual language contexts: evidence from visual half-field processing of action words in proficient bilinguals

    Directory of Open Access Journals (Sweden)

    Marlena eKrefta

    2015-08-01

    Full Text Available When reading, proficient bilinguals seem to engage the same cognitive circuits regardless of the language in use. Yet, whether or not such ‘bilingual’ mechanisms would be lateralized in the same way in distinct – single or dual – language contexts is a question for debate. To fill this gap, we tested 18 highly proficient Polish (L1) – English (L2) childhood bilinguals whose task was to read aloud one of two laterally presented action verbs, one stimulus per visual half field. While in the single-language blocks only L1 or L2 words were shown, in the subsequent mixed-language blocks words from both languages were concurrently displayed. All stimuli were presented for 217 ms and followed by masks in which letters were replaced with hash marks. Since in non-simultaneous bilinguals the control of language, skilled actions (including reading), and representations of action concepts are typically left-lateralized, the vast majority of our participants showed the expected significant right-visual-field advantage for L1 and L2, both for accuracy and response times. The observed effects were nevertheless associated with substantial variability in the strength of the lateralization of the mechanisms involved. Moreover, although it could be predicted that participants’ performance should be better in a single-language context, accuracy was significantly higher and response times were significantly shorter in a dual-language context, irrespective of the language tested. Finally, for both accuracy and response times, there were significant positive correlations between the laterality indices (LIs) of both languages independent of the context, with a significantly greater left-sided advantage for L1 vs. L2 in the mixed-language blocks, based on LIs calculated for response times. Thus, despite similar representations of the two languages in the bilingual brain, these results also point to the functional separation of L1 and L2 in the dual
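    The abstract does not give the LI formula; a common convention for accuracy-like scores is (RVF − LVF)/(RVF + LVF), so positive values indicate a right-visual-field (left-hemisphere) advantage. A minimal sketch under that assumption (the scores are invented):

```python
def laterality_index(rvf_score, lvf_score):
    """Laterality index on accuracy-like scores: positive values indicate
    a right-visual-field (left-hemisphere) advantage. For response times,
    where shorter is better, swap the arguments or negate the result."""
    return (rvf_score - lvf_score) / (rvf_score + lvf_score)

# Hypothetical naming accuracies per visual half field.
print(round(laterality_index(0.84, 0.56), 2))  # 0.2
```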

  17. Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.

    Science.gov (United States)

    Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly

    2012-05-01

    Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. Instead, the patterns of activity during task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas

  18. Affective processing within 1/10th of a second: High arousal is necessary for early facilitative processing of negative but not positive words.

    Science.gov (United States)

    Hofmann, Markus J; Kuchinke, Lars; Tamm, Sascha; Võ, Melissa L-H; Jacobs, Arthur M

    2009-12-01

    Lexical decisions to high- and low-arousal negative words and to low-arousal neutral and positive words were examined in an event-related potentials (ERP) study. Reaction times to positive and high-arousal negative words were shorter than those to neutral (low-arousal) words, whereas those to low-arousal negative words were longer. A similar pattern was observed in an early time window of the ERP response: Both positive and high-arousal negative words elicited greater negative potentials in a time frame of 80 to 120 msec after stimulus onset. This result suggests that arousal has a differential impact on early lexical processing of positive and negative words. Source localization in the relevant time frame revealed that the arousal effect in negative words is likely to be localized in a left occipito-temporal region including the middle temporal and fusiform gyri. The ERP arousal effect appears to result from early lexico-semantic processing in high-arousal negative words.

  19. Visual form-processing deficits: a global clinical classification.

    Science.gov (United States)

    Unzueta-Arce, J; García-García, R; Ladera-Fernández, V; Perea-Bartolomé, M V; Mora-Simón, S; Cacho-Gutiérrez, J

    2014-10-01

    Patients who have difficulty recognising visual form stimuli are usually labelled as having visual agnosia. However, recent studies allow us to identify different clinical manifestations corresponding to discrete diagnostic entities, which reflect a variety of deficits along the continuum of cortical visual processing. We reviewed the clinical cases published in the medical literature, as well as proposals for classifying the deficits, in order to provide a global perspective on the subject. Here, we present the main findings on the neuroanatomical basis of visual form processing and discuss the criteria for evaluating processing that may be abnormal. We also include a comprehensive diagram of visual form-processing deficits that represents the different clinical cases described in the literature. Lastly, we propose a decision tree to serve as a guide in the process of diagnosing such cases. Although the medical community largely agrees on which cortical areas and neuronal circuits are involved in visual processing, future studies making use of new functional neuroimaging techniques will provide more in-depth information. A well-structured and exhaustive assessment of the different stages of visual processing, designed with a global view of the deficit in mind, will give a better idea of the prognosis and serve as a basis for planning personalised psychostimulation and rehabilitation strategies. Copyright © 2011 Sociedad Española de Neurología. Published by Elsevier España. All rights reserved.
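    As an illustration only, a diagnostic tree of the proposed kind might branch on a few bedside findings. The criteria below follow the textbook apperceptive/associative distinction and are entirely hypothetical, not the authors' actual tree:

```python
def classify_visual_form_deficit(can_copy_drawings: bool,
                                 can_name_from_vision: bool,
                                 can_name_from_touch: bool) -> str:
    """Toy diagnostic tree for visual form-processing deficits.
    The branches are a textbook-style simplification."""
    if can_name_from_vision:
        return "no visual form-processing deficit indicated"
    if not can_name_from_touch:
        # Naming fails in every modality: not a purely visual problem.
        return "deficit not modality-specific (consider anomia or semantics)"
    if not can_copy_drawings:
        return "apperceptive-type deficit (shape perception impaired)"
    return "associative-type deficit (perception intact, recognition impaired)"

print(classify_visual_form_deficit(False, False, True))
# apperceptive-type deficit (shape perception impaired)
```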

  20. The way you say it, the way I feel it: emotional word processing in accented speech

    Science.gov (United States)

    Hatzidaki, Anna; Baus, Cristina; Costa, Albert

    2015-01-01

    The present study examined whether processing words with affective connotations in a listener's native language may be modulated by accented speech. To address this question, we used the Event Related Potential (ERP) technique and recorded the cerebral activity of Spanish native listeners, who performed a semantic categorization task, while listening to positive, negative and neutral words produced in standard Spanish or in four foreign accents. The behavioral results yielded longer latencies for emotional than for neutral words in both native and foreign-accented speech, with no difference between positive and negative words. The electrophysiological results replicated previous findings from the emotional language literature, with the amplitude of the Late Positive Complex (LPC), associated with emotional language processing, being larger (more positive) for emotional than for neutral words at posterior scalp sites. Interestingly, foreign-accented speech was found to interfere with the processing of positive valence and go along with a negativity bias, possibly suggesting heightened attention to negative words. The manipulation employed in the present study provides an interesting perspective on the effects of accented speech on processing affective-laden information. It shows that higher order semantic processes that involve emotion-related aspects are sensitive to a speaker's accent. PMID:25870577

  1. Student’s thinking process in solving word problems in geometry

    Science.gov (United States)

    Khasanah, V. N.; Usodo, B.; Subanti, S.

    2018-05-01

    This research aims to describe the thinking processes of seventh-grade junior high school students as they solve geometry word problems. The research was descriptive and qualitative. Subjects were selected based on sex and differences in mathematical ability. Data were collected from students' written test work, interviews, and observation. The results showed no difference in thinking process between males and females with high mathematical ability, but there were differences between males and females with moderate and low mathematical ability. It was also found that males with moderate mathematical ability took a long time at the step of making problem-solving plans, while females with moderate mathematical ability took a long time at the step of understanding the problem. Knowing students' thinking processes in solving word problems matters because it lets the teacher recognise the difficulties students face, minimise recurrence of the same errors in problem solving, and prepare learning strategies better matched to students' thinking processes.

  2. From Word Alignment to Word Senses, via Multilingual Wordnets

    Directory of Open Access Journals (Sweden)

    Dan Tufis

    2006-05-01

    Full Text Available Most of the successful commercial applications in language processing (text and/or speech) dispense with any explicit concern for semantics, with the usual motivation stemming from the high computational costs required for dealing with semantics in the case of large volumes of data. With recent advances in corpus linguistics and statistics-based methods in NLP, revealing useful semantic features of linguistic data is becoming cheaper and cheaper, and the accuracy of this process is steadily improving. Lately, there seems to be a growing acceptance of the idea that multilingual lexical ontologies might be the key towards aligning different views on the semantic atomic units to be used in characterizing the general meaning of various multilingual documents. Depending on the granularity at which semantic distinctions are necessary, the accuracy of basic semantic processing (such as word sense disambiguation) can be very high with relatively low-complexity computing. The paper substantiates this statement by presenting a statistics-based system for word alignment and word sense disambiguation in parallel corpora. We describe a word alignment platform which ensures text pre-processing (tokenization, POS-tagging, lemmatization, chunking, sentence and word alignment) as required for accurate word sense disambiguation.
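    The statistical word alignment step can be made concrete with a toy sketch. Below is a minimal EM loop in the style of IBM Model 1, estimating lexical translation probabilities from a tiny hand-made parallel corpus; the corpus and the `align` helper are illustrative assumptions, not the authors' actual platform.

```python
from collections import defaultdict

# Toy parallel corpus (an illustrative assumption, not the paper's data),
# already tokenized and lemmatized as the platform's pre-processing would do.
corpus = [
    (["the", "house"], ["la", "casa"]),
    (["my", "house"], ["mi", "casa"]),
    (["the", "book"], ["el", "libro"]),
]

target_vocab = {f for _, fs in corpus for f in fs}
# t[(f, e)] = P(target word f | source word e), initialized uniformly.
t = defaultdict(lambda: 1.0 / len(target_vocab))

for _ in range(10):  # a few EM iterations, IBM Model 1 style
    count, total = defaultdict(float), defaultdict(float)
    for es, fs in corpus:  # E-step: expected alignment counts
        for f in fs:
            z = sum(t[(f, e)] for e in es)
            for e in es:
                c = t[(f, e)] / z
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():  # M-step: re-normalize per source word
        t[(f, e)] = c / total[e]

def align(es, fs):
    """Link each target word to its most probable source word."""
    return {f: max(es, key=lambda e: t[(f, e)]) for f in fs}

print(align(["the", "house"], ["la", "casa"]))  # → {'la': 'the', 'casa': 'house'}
```

    In a real system these probabilities would feed a sense-disambiguation step; here they only show how co-occurrence statistics resolve word links.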

  3. Processing Visual Images

    International Nuclear Information System (INIS)

    Litke, Alan

    2006-01-01

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  4. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  5. Reading visual braille with a retinal prosthesis.

    Science.gov (United States)

    Lauritzen, Thomas Z; Harris, Jordan; Mohand-Said, Saddek; Sahel, Jose A; Dorn, Jessy D; McClure, Kelly; Greenberg, Robert J

    2012-01-01

    Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2-4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.
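    The letter encoding used in the study can be sketched directly. In a standard braille cell the dots are numbered 1-3 down the left column and 4-6 down the right, and each letter selects a subset of the six dots; a 3 × 2 electrode grid can present the same subsets. A minimal sketch for the letters a-j follows (the study's actual electrode-to-dot assignment is an implementation detail not reproduced here).

```python
# A standard braille cell numbers dots 1-3 down the left column and
# 4-6 down the right; each letter lights a subset of the six dots.
# Letters a-j only, for illustration (standard braille values).
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5},
    "i": {2, 4}, "j": {2, 4, 5},
}

def electrode_mask(letter):
    """3x2 on/off grid, rows top to bottom: which electrodes to drive."""
    dots = BRAILLE[letter]
    return [[int(row + 1 in dots), int(row + 4 in dots)]
            for row in range(3)]

print(electrode_mask("c"))  # → [[1, 1], [0, 0], [0, 0]]
```

    Spelling a word then amounts to stimulating one such mask per letter, in sequence, as in the open-choice reading paradigm.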

  6. Patterns and Meanings of English Words through Word Formation Processes of Acronyms, Clipping, Compound and Blending Found in Internet-Based Media

    Directory of Open Access Journals (Sweden)

    Rio Rini Diah Moehkardi

    2017-02-01

    Full Text Available This research aims to explore the word-formation processes behind new English words found in internet-based media through acronymy, compounding, clipping, and blending, and the meanings of those words. The study applies Plag’s (2002) framework for acronyms and compounds, Jamet’s (2009) framework for clipping, and Algeo’s (1977) framework, as presented in Hosseinzadeh (2014), for blending. Despite the patterns established in each framework, there can be novel and modified ways in which words are formed and in which meaning develops in the newly formed words. The research shows that well-accepted acronyms can become real words by taking lower case and affixation. Some acronyms initialize non-lexical words, use non-initial letters, or use letters and numbers pronounced the same as the words they represent. Compounding also admits numbers as element members of the compound, and nominal compounds are likely to carry metaphorical and idiomatic meanings; some compounds evolve toward new, more specific meanings. The study also finds that back-clipping is the most dominant type of clipping. Within the clipping sub-category of blending, when clipping takes place, the non-head element is back-clipped and the head is fore-clipped.
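    The blending pattern reported for the clipping sub-category (back-clip the non-head, fore-clip the head) can be sketched as a small function. The split points below are chosen by hand for each classic example and are illustrative only; real blends vary in where the clip falls.

```python
def blend(non_head, head, keep_front, keep_back):
    """Back-clip the non-head (keep its front) and fore-clip the head
    (keep its back), then join the remnants into the blend."""
    return non_head[:keep_front] + head[-keep_back:]

print(blend("breakfast", "lunch", 2, 4))  # → brunch
print(blend("smoke", "fog", 2, 2))        # → smog
```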

  7. Deployment of spatial attention to words in central and peripheral vision.

    Science.gov (United States)

    Ducrot, Stéphanie; Grainger, Jonathan

    2007-05-01

    Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

  8. Word deafness in Wernicke's aphasia.

    OpenAIRE

    Kirshner, H S; Webb, W G; Duncan, G W

    1981-01-01

    Three patients with otherwise typical Wernicke's aphasia showed consistent superiority of visual over auditory comprehension. The precedents for and anatomical basis of a selective auditory deficit in Wernicke's aphasia are discussed, including the relationship to pure word deafness. One implication of spared visual language function may be the use of gesture in language therapy for such patients.

  9. Sex differences in brain activation patterns during processing of positively and negatively valenced emotional words.

    Science.gov (United States)

    Hofer, Alex; Siedentopf, Christian M; Ischebeck, Anja; Rettenbacher, Maria A; Verius, Michael; Felber, Stephan; Wolfgang Fleischhacker, W

    2007-01-01

    Previous studies have suggested that men and women process emotional stimuli differently. In this study, we used event-related functional magnetic resonance imaging (fMRI) to investigate gender differences in regional cerebral activity during the perception of positive or negative emotions. The experiment comprised two emotional conditions (positively/negatively valenced words) during which fMRI data were acquired. Thirty-eight healthy volunteers (19 males, 19 females) were investigated. A direct comparison of brain activation between men and women revealed differential activation in the right putamen, the right superior temporal gyrus, and the left supramarginal gyrus during processing of positively valenced words versus non-words for women versus men. By contrast, during processing of negatively valenced words versus non-words, relatively greater activation was seen in the left perirhinal cortex and hippocampus for women versus men, and in the right supramarginal gyrus for men versus women. Our findings suggest gender-related neural responses to emotional stimuli and could contribute to the understanding of mechanisms underlying the gender disparity of neuropsychiatric diseases such as mood disorders.

  10. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    OpenAIRE

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2012-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual varia...

  11. Visual attention in posterior stroke

    DEFF Research Database (Denmark)

    Fabricius, Charlotte; Petersen, Anders; Iversen, Helle K

    Objective: Impaired visual attention is common following strokes in the territory of the middle cerebral artery, particularly in the right hemisphere. However, attentional effects of more posterior lesions are less clear. The aim of this study was to characterize visual processing speed...... and apprehension span following posterior cerebral artery (PCA) stroke. We also relate these attentional parameters to visual word recognition, as previous studies have suggested that reduced visual speed and span may explain pure alexia. Methods: Nine patients with MR-verified focal lesions in the PCA......-territory (four left PCA; four right PCA; one bilateral, all >1 year post stroke) were compared to 25 controls using single case statistics. Visual attention was characterized by a whole report paradigm allowing for hemifield-specific speed and span measurements. We also characterized visual field defects...

  12. ERP signatures of conscious and unconscious word and letter perception in an inattentional blindness paradigm.

    Science.gov (United States)

    Schelonka, Kathryn; Graulty, Christian; Canseco-Gonzalez, Enriqueta; Pitts, Michael A

    2017-09-01

    A three-phase inattentional blindness paradigm was combined with ERPs. While participants performed a distracter task, line segments in the background formed words or consonant-strings. Nearly half of the participants failed to notice these word-forms and were deemed inattentionally blind. All participants noticed the word-forms in phase 2 of the experiment while they performed the same distracter task. In the final phase, participants performed a task on the word-forms. In all phases, including during inattentional blindness, word-forms elicited distinct ERPs during early latencies (∼200-280ms) suggesting unconscious orthographic processing. A subsequent ERP (∼320-380ms) similar to the visual awareness negativity appeared only when subjects were aware of the word-forms, regardless of the task. Finally, word-forms elicited a P3b (∼400-550ms) only when these stimuli were task-relevant. These results are consistent with previous inattentional blindness studies and help distinguish brain activity associated with pre- and post-perceptual processing from correlates of conscious perception. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Is Natural Language a Perigraphic Process? The Theorem about Facts and Words Revisited

    Directory of Open Access Journals (Sweden)

    Łukasz Dębowski

    2018-01-01

    Full Text Available As we discuss, a stationary stochastic process is nonergodic when a random persistent topic can be detected in the infinite random text sampled from the process, whereas we call the process strongly nonergodic when an infinite sequence of independent random bits, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we adapt this property back to ergodic processes. Subsequently, we call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. We present a simple example of such a process. Moreover, we demonstrate an assertion which we call the theorem about facts and words. This proposition states that the number of probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the Prediction by Partial Matching (PPM) compression algorithm. We also observe that the number of the word-like strings for a sample of plays by Shakespeare follows an empirical stepwise power law, in a stark contrast to Markov processes. Hence, we suppose that natural language considered as a process is not only non-Markov but also perigraphic.
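    The power-law growth of distinct word-like strings can be checked with a rough sketch. The function below fits V(n) ∝ n^β over prefixes of a token sequence by least squares in log-log space; it counts distinct pre-split tokens rather than PPM-detected word-like strings, so it only approximates the paper's procedure.

```python
import math

def vocab_growth_exponent(tokens, n_points=10):
    """Fit V(n) ~ c * n**beta over prefixes of the token sequence,
    where V(n) is the number of distinct tokens among the first n;
    beta is estimated by least squares in log-log space."""
    xs, ys = [], []
    for i in range(1, n_points + 1):
        n = len(tokens) * i // n_points  # prefix length for this sample point
        xs.append(math.log(n))
        ys.append(math.log(len(set(tokens[:n]))))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# All-distinct tokens grow linearly, so beta comes out as 1.0 exactly;
# a tiny cyclic vocabulary saturates, driving beta toward 0.
print(vocab_growth_exponent([f"w{i}" for i in range(1000)]))  # → 1.0
print(vocab_growth_exponent(["a", "b", "c"] * 400))
```

    A Markov source would show β near 0 at large n; a persistent β strictly between 0 and 1 is the Heaps-law-like signature the paper associates with perigraphic behaviour.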

  14. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    Science.gov (United States)

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  15. The role of visual representations within working memory for paired-associate and serial order of spoken words.

    Science.gov (United States)

    Ueno, Taiji; Saito, Satoru

    2013-09-01

    Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.

  16. Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children

    Directory of Open Access Journals (Sweden)

    Mélanie Havy

    2017-12-01

    Full Text Available From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., the acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.

  17. Bilingual visual word recognition and lexical access

    NARCIS (Netherlands)

    Dijkstra, A.F.J.; Kroll, J.F.; Groot, A.M.B. de

    2005-01-01

    In spite of the intuition of many bilinguals, a review of empirical studies indicates that during reading under many circumstances, possible words from different languages temporarily become active. Such evidence for "language non-selective lexical access" is found using stimulus materials of

  18. P2-13: Location word Cues' Effect on Location Discrimination Task: Cross-Modal Study

    Directory of Open Access Journals (Sweden)

    Satoko Ohtsuka

    2012-10-01

    Full Text Available As is well known, participants are slower and make more errors in responding to the display color of an incongruent color word than a congruent one. This traditional Stroop effect is often accounted for by relatively automatic and dominant word processing. Although the word dominance account has been widely supported, it is not clear across what range of perceptual tasks it is valid. Here we aimed to examine whether the word dominance effect is observed in location Stroop tasks and in audio-visual situations. The participants were required to press a key according to the location of visual (Experiment 1) and auditory (Experiment 2) targets, left or right, as soon as possible. A cue of written (Experiments 1a and 2a) or spoken (Experiments 1b and 2b) location words, “left” or “right”, was presented on the left or right side of the fixation with cue lead times (CLTs) of 200 ms and 1200 ms. Reaction time from target presentation to key press was recorded as the dependent variable. The results were that the location validity effect was marked in within-modality but less so in cross-modality trials, and the word validity effect was strong in within- but not in cross-modality trials. The CLT gave some effect of inhibition of return. Thus word dominance may be less effective in location tasks and in cross-modal situations: spatial correspondence seems to overcome the word effect.

  19. The (lack of) effect of dynamic visual noise on the concreteness effect in short-term memory.

    Science.gov (United States)

    Castellà, Judit; Campoy, Guillermo

    2018-05-17

    It has been suggested that the concreteness effect in short-term memory (STM) is a consequence of concrete words having more distinctive and richer semantic representations. The generation and storage of visual codes in STM could also play a crucial role in the effect, because concrete words are more imageable than abstract words. If this were the case, introducing a visual interference task would be expected to disrupt recall of concrete words. A Dynamic Visual Noise (DVN) display, which has been shown to eliminate the concreteness effect in long-term memory (LTM), was presented during encoding of concrete and abstract words in an STM serial recall task. Results showed a main effect of word type, with more item errors for abstract words, and a main effect of DVN, which impaired global performance through more order errors, but no interaction, suggesting that DVN had no impact on the concreteness effect. These findings are discussed in terms of LTM participation through redintegration processes and in terms of language-based models of verbal STM.

  20. Affective Congruence between Sound and Meaning of Words Facilitates Semantic Decision.

    Science.gov (United States)

    Aryani, Arash; Jacobs, Arthur M

    2018-05-31

    A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). Thus, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature in spoken language applying also to "ordinary" words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words in two distinctive groups of iconic vs. non-iconic based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.

  1. Strange Words: Autistic Traits and the Processing of Non-Literal Language.

    Science.gov (United States)

    McKenna, Peter E; Glass, Alexandra; Rajendran, Gnanathusharan; Corley, Martin

    2015-11-01

    Previous investigations into metonymy comprehension in ASD have confounded metonymy with anaphora, and outcome with process. Here we show how these confounds may be avoided, using data from non-diagnosed participants classified using Autism Quotient. Participants read sentences containing target words with novel or established metonymic senses (e.g., Finland, Vietnam) in literal- or figurative-supporting contexts. Participants took longer to read target words in figurative contexts, especially where the metonymic sense was novel. Importantly, participants with higher AQs took longer still to read novel metonyms. This suggests a focus for further exploration, in terms of potential differences between individuals diagnosed with ASD and their neurotypical counterparts, and more generally in terms of the processes by which comprehension is achieved.

  2. Examining the Relationship between Letter Processing and Word Processing Skills in Deaf and Hearing Readers

    Science.gov (United States)

    Guldenoglu, Birkan; Miller, Paul; Kargin, Tevhide

    2014-01-01

    The present study aimed to examine the relationship between letter processing and word processing skills in deaf and hearing readers. The participants were 105 students (51 of them hearing, 54 of them deaf) who were evenly and randomly recruited from two levels of education (primary = 3rd-4th graders; middle = 6th-7th graders). The students were…

  3. Attentional Processing and Recall of Emotional Words

    OpenAIRE

    Fraga Carou, Isabel; Redondo, Jaime; Piñeiro, Ana; Padrón, Isabel; Fernández-Rey, José; Alcaraz, Miguel

    2011-01-01

    Three experiments were carried out in order to evaluate the attention paid to words of different emotional value. A dual-task experimental paradigm was employed, registering response times to acoustic tones which were presented during the reading of words. The recall was also evaluated by means of an intentional immediate recall test. The results reveal that neither the emotional valence nor the arousal of words on their own affected the attention paid by participants. Only in the third exper...

  4. The impact of metacognitive strategies and self-regulating processes of solving math word problems

    OpenAIRE

    Eda Vula; Rrezarta Avdyli; Valbona Berisha; Blerim Saqipi; Shpetim Elezi

    2017-01-01

    This empirical study investigates the impact of metacognitive strategies and self-regulating processes in learners’ achievement on solving math word problems. It specifically analyzes the impact of the linguistic factor and the number of steps and arithmetical operations that learners need to apply during the process of solving math word problems. Two hundred sixty-three learners, of three classes of third graders (N=130) and four classes of fifth ...

  5. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    Directory of Open Access Journals (Sweden)

    Christian Stephan-Otto

    Full Text Available Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  6. Affective priming effects of musical sounds on the processing of word meaning.

    Science.gov (United States)

    Steinbeis, Nikolaus; Koelsch, Stefan

    2011-03-01

    Recent studies have shown that music is capable of conveying semantically meaningful concepts. Several questions have subsequently arisen particularly with regard to the precise mechanisms underlying the communication of musical meaning as well as the role of specific musical features. The present article reports three studies investigating the role of affect expressed by various musical features in priming subsequent word processing at the semantic level. By means of an affective priming paradigm, it was shown that both musically trained and untrained participants evaluated emotional words congruous to the affect expressed by a preceding chord faster than words incongruous to the preceding chord. This behavioral effect was accompanied by an N400, an ERP typically linked with semantic processing, which was specifically modulated by the (mis)match between the prime and the target. This finding was shown for the musical parameter of consonance/dissonance (Experiment 1) and then extended to mode (major/minor) (Experiment 2) and timbre (Experiment 3). Seeing that the N400 is taken to reflect the processing of meaning, the present findings suggest that the emotional expression of single musical features is understood by listeners as such and is probably processed on a level akin to other affective communications (i.e., prosody or vocalizations) because it interferes with subsequent semantic processing. There were no group differences, suggesting that musical expertise does not have an influence on the processing of emotional expression in music and its semantic connotations.

  7. Processing of Words Related to the Demands of a Previously Solved Problem

    Directory of Open Access Journals (Sweden)

    Kowalczyk Marek

    2014-06-01

    Full Text Available Earlier research by the author brought about findings suggesting that people in a special way process words related to demands of a problem they previously solved, even when they do not consciously notice this relationship. The findings concerned interference in the task in which the words appeared, a shift in affective responses to them that depended on sex of the participants, and impaired memory of the words. The aim of this study was to replicate these effects and to find out whether they are related to working memory (WM span of the participants, taken as a measure of the individual’s ability to control attention. Participants in the experimental group solved a divergent problem, then performed an ostensibly unrelated speeded affective classification task concerning each of a series of nouns, and then performed an unexpected cued recall task for the nouns. Afterwards, a task measuring WM span was administered. In the control group there was no problem-solving phase. Response latencies for words immediately following problem-related words in the classification task were longer in the experimental than in the control group, but there was no relationship between this effect and WM span. Solving the problem, in interaction with sex of the participants and, independently, with their WM span, influenced affective responses to problem-related words. Recall of these words, however, was not impaired in the experimental group.

  8. An fMRI study of concreteness effects in spoken word recognition.

    Science.gov (United States)

    Roxbury, Tracy; McMahon, Katie; Copland, David A

    2014-09-30

    Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high imageability nouns, (b) abstract, low imageability nouns and (c) opaque legal pseudowords presented in a pseudorandomised, event-related design. Activation for the concrete, abstract and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings of concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than for both the abstract and pseudoword conditions, and the abstract condition was significantly faster than the pseudoword condition. Significant activity was also elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions that are activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.

  9. Visual memory and visual mental imagery recruit common control and sensory regions of the brain.

    Science.gov (United States)

    Slotnick, Scott D; Thompson, William L; Kosslyn, Stephen M

    2012-01-01

    Separate lines of research have shown that visual memory and visual mental imagery are mediated by frontal-parietal control regions and can rely on occipital-temporal sensory regions of the brain. We used fMRI to assess the degree to which visual memory and visual mental imagery rely on the same neural substrates. During the familiarization/study phase, participants studied drawings of objects. During the test phase, words corresponding to old and new objects were presented. In the memory test, participants responded "remember," "know," or "new." In the imagery test, participants responded "high vividness," "moderate vividness," or "low vividness." Visual memory (old-remember) and visual imagery (old-high vividness) were commonly associated with activity in frontal-parietal control regions and occipital-temporal sensory regions. In addition, visual memory produced greater activity than visual imagery in parietal and occipital-temporal regions. The present results suggest that visual memory and visual imagery rely on highly similar, but not identical, cognitive processes.

  10. Acquiring Orthographic Processing through Word Reading: Evidence from Children Learning to Read French and English

    Science.gov (United States)

    Pasquarella, Adrian; Deacon, Helene; Chen, Becky X.; Commissaire, Eva; Au-Yeung, Karen

    2014-01-01

    This study examined the within-language and cross-language relationships between orthographic processing and word reading in French and English across Grades 1 and 2. Seventy-three children in French Immersion completed measures of orthographic processing and word reading in French and English in Grade 1 and Grade 2, as well as a series of control…

  11. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic < semantic). These data provided strong support for the dual-coding model.

  12. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of a local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carried out experiments on the ImageCLEFmed datasets. The results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
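
    The QP step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the toy vocabulary, and the choice of a general-purpose SLSQP solver are all assumptions; the paper only specifies that reconstruction weights over neighboring visual words are obtained as a QP and reused as contribution functions.

    ```python
    # Sketch of a "QP assignment": a local descriptor is soft-assigned to its
    # k nearest visual words using reconstruction weights from a small
    # quadratic program (sum-to-one, non-negative). Illustrative only.
    import numpy as np
    from scipy.optimize import minimize

    def qp_assignment(descriptor, vocabulary, k=3):
        """Return (indices, weights): contributions of the k nearest visual words."""
        # 1. Find the k nearest visual words in the vocabulary.
        dists = np.linalg.norm(vocabulary - descriptor, axis=1)
        nearest = np.argsort(dists)[:k]
        B = vocabulary[nearest]                      # k x d basis of neighbours

        # 2. Solve min_w ||descriptor - w @ B||^2  s.t.  sum(w) = 1, w >= 0.
        def objective(w):
            return np.sum((descriptor - w @ B) ** 2)

        w0 = np.full(k, 1.0 / k)
        res = minimize(objective, w0, method="SLSQP",
                       bounds=[(0.0, None)] * k,
                       constraints=[{"type": "eq",
                                     "fun": lambda w: w.sum() - 1.0}])
        return nearest, res.x

    # Toy example: a 2-D descriptor and a 4-word vocabulary.
    vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
    idx, w = qp_assignment(np.array([0.4, 0.2]), vocab, k=3)
    # The weights sum to 1 and spread the descriptor over nearby words,
    # instead of hard-assigning it to the single closest word.
    ```

    Each descriptor's weight vector is then accumulated into the image's bag-of-features histogram at the positions `idx`, rather than incrementing a single bin.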

  14. Words and melody are intertwined in perception of sung words: EEG and behavioral evidence.

    Directory of Open Access Journals (Sweden)

    Reyna L Gordon

    Full Text Available Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.

  15. Category specific spatial dissociations of parallel processes underlying visual naming.

    Science.gov (United States)

    Conner, Christopher R; Chen, Gang; Pieters, Thomas A; Tandon, Nitin

    2014-10-01

    The constituent elements and dynamics of the networks responsible for word production are a central issue to understanding human language. Of particular interest is their dependency on lexical category, particularly the possible segregation of nouns and verbs into separate processing streams. We applied a novel mixed-effects, multilevel analysis to electrocorticographic data collected from 19 patients (1942 electrodes) to examine the activity of broadly disseminated cortical networks during the retrieval of distinct lexical categories. This approach was designed to overcome the issues of sparse sampling and individual variability inherent to invasive electrophysiology. Both noun and verb generation evoked overlapping, yet distinct nonhierarchical processes favoring ventral and dorsal visual streams, respectively. Notable differences in activity patterns were noted in Broca's area and superior lateral temporo-occipital regions (verb > noun) and in parahippocampal and fusiform cortices (noun > verb). Comparisons with functional magnetic resonance imaging (fMRI) results yielded a strong correlation of blood oxygen level-dependent signal and gamma power and an independent estimate of group size needed for fMRI studies of cognition. Our findings imply parallel, lexical category-specific processes and reconcile discrepancies between lesional and functional imaging studies. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Cross-modal processing in auditory and visual working memory.

    Science.gov (United States)

    Suchan, Boris; Linnewerth, Britta; Köster, Odo; Daum, Irene; Schmid, Gebhard

    2006-02-01

    This study aimed to further explore processing of auditory and visual stimuli in working memory. Smith and Jonides (1997) [Smith, E.E., Jonides, J., 1997. Working memory: A view from neuroimaging. Cogn. Psychol. 33, 5-42] described a modified working memory model in which visual input is automatically transformed into a phonological code. To study this process, auditory and the corresponding visual stimuli were presented in a variant of the 2-back task which involved changes from the auditory to the visual modality and vice versa. Brain activation patterns underlying visual and auditory processing as well as transformation mechanisms were analyzed. Results yielded a significant activation in the left primary auditory cortex associated with transformation of visual into auditory information which reflects the matching and recoding of a stored item and its modality. This finding yields empirical evidence for a transformation of visual input into a phonological code, with the auditory cortex as the neural correlate of the recoding process in working memory.
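
    The 2-back logic underlying the task above can be sketched as follows. The stimulus stream and the scoring function are toy constructions for illustration, not the study's actual materials: in an n-back task, each item is a target exactly when it matches the item presented n positions earlier.

    ```python
    # Minimal 2-back sketch: ground-truth targets and a simple accuracy score.
    # Stream contents and function names are illustrative assumptions.
    def n_back_targets(stream, n=2):
        """Return booleans: True where stream[i] matches stream[i - n]."""
        return [i >= n and stream[i] == stream[i - n] for i in range(len(stream))]

    def score(responses, stream, n=2):
        """Proportion of positions where the response matches the ground truth."""
        targets = n_back_targets(stream, n)
        hits = sum(r == t for r, t in zip(responses, targets))
        return hits / len(stream)

    stream = ["A", "B", "A", "C", "A", "D"]
    print(n_back_targets(stream))   # [False, False, True, False, True, False]
    ```

    In the cross-modal variant described above, successive items switch between auditory and visual presentation, so detecting a 2-back match requires recoding the stored item into a common (phonological) format before comparison.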

  17. A Survey on Sensor Coverage and Visual Data Capturing/Processing/Transmission in Wireless Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Florence G. H. Yap

    2014-02-01

    Full Text Available Wireless Visual Sensor Networks (WVSNs, where camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. As compared to traditional wireless sensor networks (WSNs that can only transmit scalar information (e.g., temperature, the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data is much bigger and more complicated, so intelligent schemes are required to capture/process/transmit visual data in resource-limited (hardware capability and bandwidth WVSNs. WVSNs introduce new multi-disciplinary research opportunities in topics that include visual sensor hardware, image and multimedia capture and processing, wireless communication and networking. In this paper, we survey existing research efforts on the visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early stage and there are still many open issues that have not been fully addressed. More new novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs.

  18. An fMRI study of semantic processing in men with schizophrenia

    OpenAIRE

    Kubicki, M.; McCarley, R.W.; Nestor, P.G.; Huh, T.; Kikinis, R.; Shenton, M.E.; Wible, C.G.

    2003-01-01

    As a means toward understanding the neural bases of schizophrenic thought disturbance, we examined brain activation patterns in response to semantically and superficially encoded words in patients with schizophrenia. Nine male schizophrenic and 9 male control subjects were tested in a visual levels of processing (LOP) task first outside the magnet and then during the fMRI scanning procedures (using a different set of words). During the experiments visual words were presented under two conditi...

  19. Processing reafferent and exafferent visual information for action and perception.

    Science.gov (United States)

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  20. Creating visual explanations improves learning.

    Science.gov (United States)

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  1. Eye movements during the handwriting of words: Individually and within sentences.

    Science.gov (United States)

    Sita, Jodi C; Taylor, Katelyn A

    2015-10-01

    Handwriting, a complex motor process, involves the coordination of both the upper limb and the visual system. The gaze behavior that occurs during the handwriting process is an area that has been little studied. This study investigated the eye movements of adults during writing and reading tasks. Eye and handwriting movements were recorded for six different words over three different tasks. The analysis compared reading and handwriting the same words, a between-condition comparison, and a comparison between the two handwriting tasks. Compared to reading, participants produced more fixations during handwriting tasks, and the average fixation durations were longer. When reading, fixations were found to be mostly around the center of the word, whereas fixations when writing appeared to be made for each letter in a written word, were located around the base of letters, and flowed in a left-to-right direction. Between the two writing tasks, more fixations were made when words were written individually compared to within sentences, yet fixation durations were no different. Correlating the number of fixations with kinematic variables revealed that horizontal size and road length were strongly correlated with the number of fixations made by participants. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. The role of the left Brodmann's areas 44 and 45 in reading words and pseudowords

    OpenAIRE

    Heim, S.; Alter, K.; Ischebeck, A.; Amunts, K.; Eickhoff, S.; Mohlberg, H.; Zilles, K.; von Cramon, D.; Friederici, A.

    2005-01-01

    In this functional magnetic resonance imaging (fMRI) study, we investigated the influence of two tasks (lexical decision, LDT; phonological decision, PDT) on activation in Broca's region (left Brodmann's areas [BA] 44 and 45) during the processing of visually presented words and pseudowords. Reaction times were longer for pseudowords than words in LDT but did not differ in PDT. By combining the fMRI data with cytoarchitectonic anatomical probability maps, we demonstrated that the left BA 44 an...

  3. Corticospinal excitability during the processing of handwritten and typed words and non-words.

    Science.gov (United States)

    Gordon, Chelsea L; Spivey, Michael J; Balasubramaniam, Ramesh

    2017-06-09

    A number of studies have suggested that perception of actions is accompanied by motor simulation of those actions. To further explore this proposal, we applied transcranial magnetic stimulation (TMS) to the left primary motor cortex during the observation of handwritten and typed language stimuli, including words and non-word consonant clusters. We recorded motor-evoked potentials (MEPs) from the right first dorsal interosseous (FDI) muscle to measure corticospinal excitability during written text perception. We observed a facilitation in MEPs for handwritten stimuli, regardless of whether the stimuli were words or non-words, suggesting potential motor simulation during observation. We did not observe a similar facilitation for the typed stimuli, suggesting that motor simulation was not occurring during observation of typed text. By demonstrating potential simulation of written language text during observation, these findings add to a growing literature suggesting that the motor system plays a strong role in the perception of written language. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Visualizing the Verbal and Verbalizing the Visual.

    Science.gov (United States)

    Braden, Roberts A.

    This paper explores relationships of visual images to verbal elements, beginning with a discussion of visible language as represented by words printed on the page. The visual flexibility inherent in typography is discussed in terms of the appearance of the letters and the denotative and connotative meanings represented by type, typographical…

  5. Evolution of attention mechanisms for early visual processing

    Science.gov (United States)

    Müller, Thomas; Knoll, Alois

    2011-03-01

    Early visual processing as a method to speed up computations on visual input data has long been discussed in the computer vision community. The general target of such approaches is to filter nonrelevant information from the costly higher-level visual processing algorithms. By insertion of this additional filter layer, the overall approach can be speeded up without actually changing the visual processing methodology. Being inspired by the layered architecture of the human visual processing apparatus, several approaches for early visual processing have been recently proposed. Most promising in this field is the extraction of a saliency map to determine regions of current attention in the visual field. Such saliency can be computed in a bottom-up manner, i.e. the theory claims that static regions of attention emerge from a certain color footprint, and dynamic regions of attention emerge from connected blobs of textures moving in a uniform way in the visual field. Top-down saliency effects are either unconscious through inherent mechanisms like inhibition-of-return, i.e. within a period of time the attention level paid to a certain region automatically decreases if the properties of that region do not change, or volitional through cognitive feedback, e.g. if an object moves consistently in the visual field. These bottom-up and top-down saliency effects have been implemented and evaluated in a previous computer vision system for the project JAST. In this paper an extension applying evolutionary processes is proposed. The prior vision system utilized multiple threads to analyze the regions of attention delivered from the early processing mechanism. Here, in addition, multiple saliency units are used to produce these regions of attention. All of these saliency units have different parameter-sets. The idea is to let the population of saliency units create regions of attention, then evaluate the results with cognitive feedback and finally apply the genetic mechanism
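
    A minimal bottom-up saliency unit of the kind described above can be sketched as a center-surround difference plus inhibition-of-return. This is a simplified illustration, not the JAST system's implementation: the sigma values, decay rate, and radius are exactly the kind of per-unit parameters the evolutionary extension would vary.

    ```python
    # Center-surround saliency sketch: absolute difference between fine and
    # coarse Gaussian blurs of an intensity image, plus a simple
    # inhibition-of-return decay. All parameter values are illustrative.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_map(image, center_sigma=1.0, surround_sigma=8.0):
        """Center-surround saliency: fine minus coarse Gaussian-smoothed image."""
        center = gaussian_filter(image, center_sigma)
        surround = gaussian_filter(image, surround_sigma)
        s = np.abs(center - surround)
        return s / s.max() if s.max() > 0 else s

    def inhibit_return(saliency, attended, radius=5, decay=0.3):
        """Damp saliency around an already-attended location."""
        yy, xx = np.mgrid[0:saliency.shape[0], 0:saliency.shape[1]]
        mask = (yy - attended[0]) ** 2 + (xx - attended[1]) ** 2 <= radius ** 2
        out = saliency.copy()
        out[mask] *= decay
        return out

    # Toy example: a bright blob on a dark background is the most salient region.
    img = np.zeros((64, 64))
    img[30:34, 30:34] = 1.0
    s = saliency_map(img)
    peak = np.unravel_index(np.argmax(s), s.shape)   # lands on/near the blob
    s2 = inhibit_return(s, peak)                     # attention can move elsewhere
    ```

    In an evolutionary setting, a population of such units with different `(center_sigma, surround_sigma, radius, decay)` parameter sets would propose regions of attention, with cognitive feedback serving as the fitness signal.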

  6. To Write or to Type? The Effects of Handwriting and Word-Processing on the Written Style of Examination Essays

    Science.gov (United States)

    Mogey, Nora; Hartley, James

    2013-01-01

    There is much debate about whether or not these days students should be able to word-process essay-type examinations as opposed to handwriting them, particularly when they are asked to word-process everything else. This study used word-processing software to examine the stylistic features of 13 examination essays written by hand and 24 by…

  7. The influence of context on word order processing - an fMRI study

    DEFF Research Database (Denmark)

    Kristensen, Line Burholt; Engberg-Pedersen, Elisabeth; Nielsen, Andreas Højlund

    2013-01-01

    In languages that have subject-before-object as their canonical word order, e.g. German, English and Danish, behavioral experiments have shown more processing difficulties for object-initial clauses (OCs) than for subject-initial clauses (SCs). For processing of OCs in such languages, neuroimaging…

  8. Attention affects visual perceptual processing near the hand.

    Science.gov (United States)

    Cosman, Joshua D; Vecera, Shaun P

    2010-09-01

    Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.

  9. Modeling the length effect: Specifying the relation with visual and phonological correlates of reading

    NARCIS (Netherlands)

    van den Boer, M.; de Jong, P.F.; Haentjens-van Meeteren, M.M.

    2013-01-01

    Beginning readers' reading latencies increase as words become longer. This length effect is believed to be a marker of a serial reading process. We examined the effects of visual and phonological skills on the length effect. Participants were 184 second-grade children who read 3- to 5-letter words

  10. Infants Track Word Forms in Early Word-Object Associations

    Science.gov (United States)

    Zamuner, Tania S.; Fais, Laurel; Werker, Janet F.

    2014-01-01

    A central component of language development is word learning. One characterization of this process is that language learners discover objects and then look for word forms to associate with these objects (Mcnamara, 1984; Smith, 2000). Another possibility is that word forms themselves are also important, such that once learned, hearing a familiar…

  11. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    Science.gov (United States)

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  12. Why does picture naming take longer than word reading? The contribution of articulatory processes.

    Science.gov (United States)

    Riès, Stéphanie; Legou, Thierry; Burle, Borís; Alario, F-Xavier; Malfait, Nicole

    2012-10-01

    Since the 19th century, it has been known that response latencies are longer for naming pictures than for reading words aloud. While several interpretations have been proposed, a common general assumption is that this difference stems from cognitive word-selection processes and not from articulatory processes. Here we show that, contrary to this widely accepted view, articulatory processes are also affected by the task performed. To demonstrate this, we used a procedure that to our knowledge had never been used in research on language processing: response-latency fractionating. Along with vocal onsets, we recorded the electromyographic (EMG) activity of facial muscles while participants named pictures or read words aloud. On the basis of these measures, we were able to fractionate the verbal response latencies into two types of time intervals: premotor times (from stimulus presentation to EMG onset), mostly reflecting cognitive processes, and motor times (from EMG onset to vocal onset), related to motor execution processes. We showed that premotor and motor times are both longer in picture naming than in reading, although articulation is already initiated in the latter measure. Future studies based on this new approach should bring valuable clues for a better understanding of the relation between the cognitive and motor processes involved in speech production.

  13. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  14. Intermediate Levels of Visual Processing

    National Research Council Canada - National Science Library

    Nakayama, Ken

    1998-01-01

    ...) surface representation, here we have shown that there is an intermediate level of visual processing, between the analysis of the image and higher order representations related to specific objects; (2...

  15. Effects of orthographic consistency on eye movement behavior: German and English children and adults process the same words differently.

    Science.gov (United States)

    Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin

    2015-02-01

    The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading, possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Working memory components as predictors of children's mathematical word problem solving.

    Science.gov (United States)

    Zheng, Xinhua; Swanson, H Lee; Marcoulides, George A

    2011-12-01

    This study determined the working memory (WM) components (executive, phonological loop, and visual-spatial sketchpad) that best predicted mathematical word problem-solving accuracy of elementary school children in Grades 2, 3, and 4 (N=310). A battery of tests was administered to assess problem-solving accuracy, problem-solving processes, WM, reading, and math calculation. Structural equation modeling analyses indicated that (a) all three WM components significantly predicted problem-solving accuracy, (b) reading skills and calculation proficiency mediated the predictive effects of the central executive system and the phonological loop on solution accuracy, and (c) academic mediators failed to moderate the relationship between the visual-spatial sketchpad and solution accuracy. The results support the notion that all components of WM play a major role in predicting problem-solving accuracy, but basic skills acquired in specific academic domains (reading and math) can compensate for some of the influence of WM on children's mathematical word problem solving. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Optimal viewing position in vertically and horizontally presented Japanese words.

    Science.gov (United States)

    Kajii, N; Osaka, N

    2000-11-01

    In the present study, the optimal viewing position (OVP) phenomenon in Japanese Hiragana was investigated, with special reference to a comparison between the vertical and the horizontal meridians in the visual field. In the first experiment, word recognition scores were determined while the eyes were fixating predetermined locations in vertically and horizontally displayed words. Similar to what has been reported for Roman scripts, OVP curves, which were asymmetric with respect to the beginning of words, were observed in both conditions. However, this asymmetry was less pronounced for vertically than for horizontally displayed words. In the second experiment, the visibility of individual characters within strings was examined for the vertical and horizontal meridians. As for Roman characters, letter identification scores were better in the right than in the left visual field. However, identification scores did not differ between the upper and the lower sides of fixation along the vertical meridian. The results showed that the model proposed by Nazir, O'Regan, and Jacobs (1991) cannot entirely account for the OVP phenomenon. A model in which visual and lexical factors are combined is proposed instead.

  18. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
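The spectral quantification described above (reading out steady-state response amplitude at each stimulus's tagging frequency) can be sketched as follows. This is an illustrative sketch only: the sampling rate, trial length, amplitudes, and noise level are assumptions, not the study's actual recording parameters.

```python
import numpy as np

# Illustrative sketch of frequency-tagged steady-state response
# quantification. Sampling rate, trial length, and signal are assumed.
fs = 500.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 100, 1 / fs)    # 100 s of data -> 0.01 Hz resolution

# Simulated signal: two tagged oscillations (3.14 Hz and 3.63 Hz) in noise
rng = np.random.default_rng(0)
signal = (0.8 * np.sin(2 * np.pi * 3.14 * t)
          + 0.5 * np.sin(2 * np.pi * 3.63 * t)
          + rng.normal(0.0, 1.0, t.size))

# Amplitude spectrum (scaled so a unit-amplitude sine yields 1.0 at its bin)
spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f):
    """Amplitude at the spectral bin closest to frequency f (Hz)."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# Roughly recovers the simulated amplitudes (about 0.8 and 0.5)
print(amplitude_at(3.14), amplitude_at(3.63))
```

The trial length is chosen so that both tagging frequencies fall exactly on FFT bins (0.01 Hz resolution), avoiding spectral leakage between the two closely spaced rates.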

  19. Word/sub-word lattices decomposition and combination for speech recognition

    OpenAIRE

    Le , Viet-Bac; Seng , Sopheap; Besacier , Laurent; Bigi , Brigitte

    2008-01-01

    This paper presents the benefit of using multiple lexical units in the post-processing stage of an ASR system. Since the use of sub-word units can reduce the high out-of-vocabulary rate and mitigate the lack of text resources in statistical language modeling, we propose several methods to decompose, normalize and combine word and sub-word lattices generated from different ASR systems. By using a sub-word information table, every word in a lattice can be decomposed into ...
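The decomposition step mentioned above can be sketched in a minimal form. The lattice representation (arcs as `(start_node, end_node, label)` tuples) and the `SUBWORD_TABLE` contents here are hypothetical illustrations, not the paper's actual data structures or sub-word inventory:

```python
# Hypothetical sub-word information table mapping words to sub-word units
SUBWORD_TABLE = {
    "hello": ["hel", "lo"],
    "world": ["wor", "ld"],
}

def decompose_lattice(arcs):
    """Replace each word arc with a chain of sub-word arcs.

    New intermediate nodes are labeled (start, end, i) so they never
    collide with existing node ids.
    """
    out = []
    for start, end, word in arcs:
        parts = SUBWORD_TABLE.get(word, [word])  # unknown words pass through
        nodes = [start] + [(start, end, i) for i in range(len(parts) - 1)] + [end]
        for i, part in enumerate(parts):
            out.append((nodes[i], nodes[i + 1], part))
    return out

arcs = [(0, 1, "hello"), (1, 2, "world")]
print(decompose_lattice(arcs))
```

After decomposition, word and sub-word lattices share a common label inventory and can be normalized and combined; the paper's own combination methods are not reproduced here.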

  20. The effects of recall-concurrent visual-motor distraction on picture and word recall.

    Science.gov (United States)

    Warren, M W

    1977-05-01

    The dual-coding model (Paivio, 1971, 1975) predicts a larger imaginal component in the recall of pictures relative to words and a larger imaginal component in the recall of concrete words relative to abstract words. These predictions were tested by examining the effect of a recall-concurrent imagery-suppression task (pursuit-rotor tracking) on the recall of pictures vs picture labels and on the recall of concrete words vs abstract words. The results showed that recall-concurrent pursuit-rotor tracking interfered with picture recall, but not word recall (Experiments 1 and 2); however, there was no evidence of an effect of recall-concurrent tracking on the recall of concrete words (Experiment 3). The results suggested a revision of the dual-coding model.

  1. Does "a picture is worth 1000 words" apply to iconic Chinese words? Relationship of Chinese words and pictures.

    Science.gov (United States)

    Lo, Shih-Yu; Yeh, Su-Ling

    2018-05-29

    The meaning of a picture can be extracted rapidly, but the form-to-meaning relationship is less obvious for printed words. In contrast to English words, which follow the grapheme-to-phoneme correspondence rule, the iconic nature of Chinese words might predispose them to activate their semantic representations more directly from their orthographies. By using the paradigm of repetition blindness (RB), which taps into the early level of word processing, we examined whether Chinese words activate their semantic representations as directly as pictures do. RB refers to the failure to detect the second occurrence of an item when it is presented twice in temporal proximity. Previous studies showed RB for semantically related pictures, suggesting that pictures activate their semantic representations directly from their shapes and thus two semantically related pictures are represented as repeated. However, this does not apply to English words, since no RB was found for English synonyms. In this study, we replicated the semantic RB effect for pictures, and further showed the absence of semantic RB for Chinese synonyms. Based on our findings, we suggest that Chinese words are processed like English words, which do not activate their semantic representations as directly as pictures do.

  2. Non-intentional but not automatic: reduction of word- and arrow-based compatibility effects by sound distractors in the same categorical domain.

    Science.gov (United States)

    Miles, James D; Proctor, Robert W

    2009-10-01

    In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.

  3. The brain's dorsal route for speech represents word meaning: evidence from gesture.

    Science.gov (United States)

    Josse, Goulven; Joseph, Sabine; Bertasi, Eric; Giraud, Anne-Lise

    2012-01-01

    The dual-route model of speech processing includes a dorsal stream that maps auditory to motor features at the sublexical level rather than at the lexico-semantic level. However, the literature on gesture is an invitation to revise this model because it suggests that the premotor cortex of the dorsal route is a major site of lexico-semantic interaction. Here we investigated lexico-semantic mapping using word-gesture pairs that were either congruent or incongruent. Using fMRI-adaptation in 28 subjects, we found that temporo-parietal and premotor activity during auditory processing of single action words was modulated by the prior audiovisual context in which the words had been repeated. The BOLD signal was suppressed following repetition of the auditory word alone, and further suppressed following repetition of the word accompanied by a congruent gesture (e.g. ["grasp" + grasping gesture]). Conversely, repetition suppression was not observed when the same action word was accompanied by an incongruent gesture (e.g. ["grasp" + sprinkle]). We propose a simple model to explain these results: auditory and visual information converge onto premotor cortex where it is represented in a comparable format to determine (in)congruence between speech and gesture. This ability of the dorsal route to detect audiovisual semantic (in)congruence suggests that its function is not restricted to the sublexical level.

  4. Behavioral and Neural Representations of Spatial Directions across Words, Schemas, and Images.

    Science.gov (United States)

    Weisberg, Steven M; Marchette, Steven A; Chatterjee, Anjan

    2018-05-23

    Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which the extraneous details are eliminated, but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments on men and women: a behavioral study that was preregistered and a neuroimaging study using multivoxel pattern analysis of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left"), and responded to each trial, indicating whether the spatial direction was the same or different as the one viewed previously. They responded more quickly to schemas and words than images, despite the visual complexity of stimuli being matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable in the intraparietal sulcus bilaterally but were not in schemas and words. Spatial directions were also decodable between all three formats. These results suggest that intraparietal sulcus plays a role in calculating spatial directions in visual scenes, but this neural circuitry may be bypassed when the spatial directions are presented as schemas or words. SIGNIFICANCE STATEMENT Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. 
Here, we investigate…

  5. The association between visual, nonverbal cognitive abilities and speech, phonological processing, vocabulary and reading outcomes in children with cochlear implants.

    Science.gov (United States)

    Edwards, Lindsey; Anderson, Sara

    2014-01-01

    The aim of this study was to explore the possibility that specific nonverbal, visual cognitive abilities may be associated with outcomes after pediatric cochlear implantation. The study therefore examined the relationship between visual sequential memory span and visual sequential reasoning ability, and a range of speech, phonological processing, vocabulary knowledge, and reading outcomes in children with cochlear implants. A cross-sectional, correlational design was used. Sixty-six children aged 5 to 12 years completed tests of visual memory span and visual sequential reasoning, along with tests of speech intelligibility, phonological processing, vocabulary knowledge, and word reading ability (the outcome variables). Auditory memory span was also assessed, and its relationship with the other variables examined. Significant, positive correlations were found between the visual memory and reasoning tests, and each of the outcome variables. A series of regression analyses then revealed that for all the outcome variables, after variance attributable to the age at implantation was accounted for, visual memory span and visual sequential reasoning ability together accounted for significantly more variance (up to 25%) in each outcome measure. These findings have both clinical and theoretical implications. Clinically, the findings may help improve the identification of children at risk of poor progress after implantation earlier than has been possible to date as the nonverbal tests can be administered to children as young as 2 years of age. The results may also contribute to the identification of children with specific learning or language difficulties as well as improve our ability to develop intervention strategies for individual children based on their specific cognitive processing strengths or difficulties. Theoretically, these results contribute to the growing body of knowledge about learning and development in deaf children with cochlear implants.

  6. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue

    2006-01-01

    BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movement aids the recognition of spoken words. Conventionally it is thought that the visual input disambiguates otherwise confusing auditory input. HYPOTHESIS: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p…)

  7. Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition

    Science.gov (United States)

    Yap, Melvin J.; Balota, David A.

    2007-01-01

    Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…

  8. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    Science.gov (United States)

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4 s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups, one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Face processing is gated by visual spatial attention

    Directory of Open Access Journals (Sweden)

    Roy E Crist

    2008-03-01

    Human perception of faces is widely believed to rely on automatic processing by a domain-specific, modular component of the visual system. Scalp-recorded event-related potential (ERP) recordings indicate that faces receive special stimulus processing at around 170 ms poststimulus onset, in that faces evoke an enhanced occipital negative wave, known as the N170, relative to the activity elicited by other visual objects. As predicted by modular accounts of face processing, this early face-specific N170 enhancement has been reported to be largely immune to the influence of endogenous processes such as task strategy or attention. However, most studies examining the influence of attention on face processing have focused on non-spatial attention, such as object-based attention, which tends to have longer-latency effects. In contrast, numerous studies have demonstrated that visual spatial attention can modulate the processing of visual stimuli as early as 80 ms poststimulus, substantially earlier than the N170. These temporal characteristics raise the question of whether this initial face-specific processing is immune to the influence of spatial attention. This question was addressed in a dual-visual-stream ERP study in which the influence of spatial attention on the face-specific N170 could be directly examined. As expected, early visual sensory responses to all stimuli presented in an attended location were larger than responses evoked by those same stimuli when presented in an unattended location. More importantly, a significant face-specific N170 effect was elicited by faces that appeared in an attended location, but not in an unattended one. In summary, early face-specific processing is not automatic, but rather, like that of other objects, strongly depends on endogenous factors such as the allocation of spatial attention. Moreover, these findings underscore the extensive influence that top-down attention exercises over the processing of…
  10. The picture superiority effect in categorization: visual or semantic?

    Science.gov (United States)

    Job, R; Rumiati, R; Lotto, L

    1992-09-01

    Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated because Ss took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike Snodgrass and McCullough, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.

  11. Processing of Words and Faces by Patients with Left and Right Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    Andrew W. Ellis

    1991-01-01

    Tests of word and face processing were given to patients with complex partial epilepsy focussed on the left or right temporal lobe, and to non-epileptic control subjects. The left TLE group showed the greatest impairment on object naming and on reading tests, but the right TLE group also showed a lesser impairment relative to the normal control subjects on both tests. The right TLE group was selectively impaired on distinguishing famous from non-famous faces while the left TLE group was impaired at naming famous faces they had successfully recognized as familiar. There was no significant difference between the three groups on recognition memory for words. The implications of the results for theories of the role of the temporal lobes in word and face processing, and the possible neural mechanisms responsible for the deficits in TLE patients, are discussed.

  12. Numbers and functional lateralization: A visual half-field and dichotic listening study in proficient bilinguals.

    Science.gov (United States)

    Klichowski, Michal; Króliczak, Gregory

    2017-06-01

    Potential links between language and numbers, and the laterality of symbolic number representations in the brain, are still debated. Furthermore, reports on bilingual individuals indicate that the language-number interrelationships might be quite complex. Therefore, we carried out a visual half-field (VHF) and dichotic listening (DL) study with action words and different forms of symbolic numbers used as stimuli to test the laterality of word and number processing in single-language, dual-language, and mixed (task and language) contexts. Experiment 1 (VHF) showed a significant right visual field/left hemispheric advantage in response accuracy for action word processing, as compared to any form of symbolic number processing. Experiment 2 (DL) revealed a substantially reversed effect: a significant right ear/left hemisphere advantage for arithmetic operations as compared to action word processing, and in response times in single- and dual-language contexts for numbers vs. action words. All these effects were language independent. Notably, for within-task response accuracy compared across modalities, significant differences were found in all studied contexts. Thus, our results run counter to findings showing that action-relevant concepts and words, as well as number words, are represented and processed primarily in the left hemisphere. Instead, we found that in the auditory context, following substantial engagement of working memory (here: by arithmetic operations), there is a subsequent functional reorganization of the processing of single stimuli, whether verbs or numbers. This reorganization (their weakened laterality), at least for response accuracy, is tied not to the processing of numbers per se but to the number of items to be processed. For response times, except for unpredictable tasks in mixed contexts, the "number problem" is more apparent.
These outcomes are highly relevant to difficulties that simultaneous translators encounter when dealing with lengthy auditory material in which single items such…

  13. Whole-word frequency and inflectional paradigm size facilitate Estonian case-inflected noun processing.

    Science.gov (United States)

    Lõo, Kaidi; Järvikivi, Juhani; Baayen, R Harald

    2018-06-01

    Estonian is a morphologically rich Finno-Ugric language with nominal paradigms that have at least 28 different inflected forms but sometimes more than 40. For languages with rich inflection, it has been argued that whole-word frequency, as a diagnostic of whole-word representations, should not be predictive for lexical processing. We report a lexical decision experiment, showing that response latencies decrease both with frequency of the inflected form and its inflectional paradigm size. Inflectional paradigm size was also predictive of semantic categorization, indicating it is a semantic effect, similar to the morphological family size effect. These findings fit well with the evidence for frequency effects of word n-grams in languages with little inflectional morphology, such as English. Apparently, the amount of information on word use in the mental lexicon is substantially larger than was previously thought. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Visual perception of ADHD children with sensory processing disorder.

    Science.gov (United States)

    Jung, Hyerim; Woo, Young Jae; Kang, Je Wook; Choi, Yeon Woo; Kim, Kyeong Mi

    2014-04-01

    The aim of the present study was to investigate the difference in visual perception between ADHD children with and without sensory processing disorder, and the relationship between sensory processing and visual perception in children with ADHD. Participants were 47 outpatients, aged 6-8 years, diagnosed with ADHD. After excluding those who met exclusion criteria, 38 subjects were clustered into two groups, ADHD children with and without sensory processing disorder (SPD), using the SSP reported by their parents; subjects then completed the K-DTVP-2. Spearman correlation analysis was run to determine the relationship between sensory processing and visual perception, and the Mann-Whitney U test was conducted to compare the K-DTVP-2 scores of the two groups. The ADHD children with SPD performed inferiorly to ADHD children without SPD on the 3 quotients of the K-DTVP-2. The GVP score of the K-DTVP-2 was related to the Movement Sensitivity section (r = 0.368*) and the Low Energy/Weak section of the SSP (r = 0.369*). The results of the present study suggest that among children with ADHD, visual perception is lower in those with comorbid SPD. Also, visual perception may be related to sensory processing, especially reactions of the vestibular and proprioceptive senses. Regarding academic performance, it is necessary to consider how sensory processing issues affect visual perception in children with ADHD.

  15. Effects of valence and arousal on emotional word processing are modulated by concreteness: Behavioral and ERP evidence from a lexical decision task.

    Science.gov (United States)

    Yao, Zhao; Yu, Deshui; Wang, Lili; Zhu, Xiangru; Guo, Jingjing; Wang, Zhenhong

    2016-12-01

    We investigated whether the effects of valence and arousal on emotional word processing are modulated by concreteness using event-related potentials (ERPs). The stimuli included concrete words (Experiment 1) and abstract words (Experiment 2) that were organized in an orthogonal design, with valence (positive and negative) and arousal (low and high) as factors in a lexical decision task. In Experiment 1, the impact of emotion on the effects of concrete words mainly resulted from the contribution of valence. Positive concrete words were processed more quickly than negative words and elicited a reduction of the N400 (300-410 ms) and an enhancement of the late positive complex (LPC; 450-750 ms), whereas no differences in response times or ERPs were found between high and low levels of arousal. In Experiment 2, the interaction between valence and arousal influenced the impact of emotion on the effects of abstract words. Low-arousal positive words were associated with shorter response times and a reduction of LPC amplitudes compared with high-arousal positive words. Low-arousal negative words were processed more slowly and elicited a reduction of the N170 (140-200 ms) compared with high-arousal negative words. The present study indicates that word concreteness modulates the contributions of valence and arousal to the effects of emotion, and this modulation occurs during the early perceptual processing stage (N170) and the late elaborate processing stage (LPC) for emotional words, and at the end of all cognitive processes (i.e., as reflected by response times). These findings support an embodied theory of semantic representation and help clarify prior inconsistent findings regarding the ways in which valence and arousal influence different stages of word processing, at least in a lexical decision task. Copyright © 2016 Elsevier B.V. All rights reserved.
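ERP component effects like those reported above (N170, N400, LPC) are conventionally quantified as the mean voltage within a fixed post-stimulus time window. A minimal sketch of that windowing step follows; the epoch layout, sampling step, and simulated waveform are assumptions for illustration, not the study's data:

```python
import numpy as np

# Epoch from -200 ms to +799 ms relative to stimulus onset, 1 ms steps
times = np.arange(-200, 800)

def mean_amplitude(erp, t_start, t_end):
    """Mean voltage within [t_start, t_end) ms relative to stimulus onset."""
    mask = (times >= t_start) & (times < t_end)
    return erp[mask].mean()

# Simulated ERP: a negative deflection peaking near 350 ms (an "N400")
erp = -2.0 * np.exp(-((times - 350) ** 2) / (2 * 40.0 ** 2))

n400 = mean_amplitude(erp, 300, 410)  # N400 window used above
lpc = mean_amplitude(erp, 450, 750)   # LPC window used above
print(n400, lpc)
```

With this simulated waveform the N400 window captures a clearly negative mean amplitude while the LPC window stays near zero, which is the kind of window-by-window contrast on which component effects are tested.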

  16. Word and Nonword Processing without Meaning Support in Korean-Speaking Children with and without Hyperlexia

    Science.gov (United States)

    Lee, Sung Hee; Hwang, Mina

    2015-01-01

    Hyperlexia is a syndrome of reading without meaning in individuals who otherwise have pronounced cognitive and language deficits. The present study investigated the quality of word representation and the effects of deficient semantic processing on word and nonword reading of Korean children with hyperlexia; their performances were compared to…

  17. Visual processing in rapid-chase systems: Image processing, attention, and awareness

    Directory of Open Access Journals (Sweden)

    Thomas eSchmidt

    2011-07-01

    Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks, where speeded pointing or keypress responses are performed towards target stimuli that are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it were darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. In this way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that "fast" visuomotor measures predominantly driven by feedforward processing should supplement "slow" psychophysical measures predominantly based on visual…

  18. Is processing of symbols and words influenced by writing system? Evidence from Chinese, Korean, English, and Greek.

    Science.gov (United States)

    Altani, Angeliki; Georgiou, George K; Deng, Ciping; Cho, Jeung-Ryeul; Katopodi, Katerina; Wei, Wei; Protopapas, Athanassios

    2017-12-01

    We examined cross-linguistic effects in the relationship between serial and discrete versions of digit naming and word reading. In total, 113 Mandarin-speaking Chinese children, 100 Korean children, 112 English-speaking Canadian children, and 108 Greek children in Grade 3 were administered tasks of serial and discrete naming of words and digits. Interrelations among tasks indicated that the link between rapid naming and reading is largely determined by the format of the tasks across orthographies. Multigroup path analyses with discrete and serial word reading as dependent variables revealed commonalities as well as significant differences between writing systems. The path coefficient from discrete digits to discrete words was greater for the more transparent orthographies, consistent with more efficient sight-word processing. The effect of discrete word reading on serial word reading was stronger in alphabetic languages, where there was also a suppressive effect of discrete digit naming. However, the effect of serial digit naming on serial word reading did not differ among the four language groups. This pattern of relationships challenges a universal account of reading fluency acquisition while upholding a universal role of rapid serial naming, further distinguishing between multi-element interword and intraword processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Audio-Visual Speech-in-Noise Perception in Dyslexia

    Science.gov (United States)

    van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean

    2018-01-01

    Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a…

  20. The Word Superiority Effect in central and peripheral vision

    DEFF Research Database (Denmark)

    Sand, Katrine; Habekost, Thomas; Petersen, Anders

    2016-01-01

    , which prevents lexical analysis of a word in the periphery. We conclude that perception of words and letters differs according to location in the visual field. Linking our results to previous studies of crowding effects in patients with reading impairments, we hypothesize that similar mechanisms may...