Habekost, Thomas; Petersen, Anders; Behrmann, Marlene
impaired in letter naming and word processing, and performance with letters and words was dissociated in all four patients, with word reading being more severely impaired than letter recognition. This suggests that the word reading deficit in pure alexia may not be reduced to an impairment in single letter...
Rabovsky, Milena; Sommer, Werner; Abdel Rahman, Rasha
Recent evidence suggests that conceptual knowledge modulates early visual stages of object recognition. The present study investigated whether similar modulations can be observed also for the recognition of object names, that is, for symbolic representations with only arbitrary relationships between their visual features and the corresponding conceptual knowledge. In a learning paradigm, we manipulated the amount of information provided about initially unfamiliar visual objects while controlling for perceptual stimulus properties and exposure. In a subsequent test session with electroencephalographic recordings, participants performed several tasks on either the objects or their written names. For objects as well as names, knowledge effects were observed as early as about 120 msec in the P1 component of the ERP, reflecting perceptual processing in extrastriate visual cortex. These knowledge-dependent modulations of early stages of visual word recognition suggest that information about word meanings may modulate the perception of arbitrarily related visual features surprisingly early.
Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M.
Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to…
Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli
The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…
Bernard, Christian; Petit, Laurent; Simon, Gregory; Rebaï, Mohamed
Background: The occipito-temporal N170 component represents the first step at which face, object, and word processing are discriminated along the ventral stream of the brain. The leftward asymmetry of the N170 observed during reading has often been associated with prelexical orthographic activation of the visual word form. However, some studies have reported a lexical frequency effect for this component, particularly during word repetition, which appears to contradict this prelexical orthographic step. Here, we ...
Winskel, Heather; Perea, Manuel
Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. In order to investigate the contribution of tone at the orthographic/phonological level during the early stages of word processing in Thai, we conducted a masked priming experiment using both lexical decision and word naming tasks. For a given target word (e.g., ห้อง/hᴐ:ŋ2/, room), five priming conditions were created: (a) identity (e.g., ห้อง/hᴐ:ŋ2/), (b) same initial consonant, but with a different tone marker (e.g., ห่อง/hᴐ:ŋ1/), (c) different initial consonant, but with the same tone marker (e.g., ศ้อง/sᴐ:ŋ2/), (d) orthographic control (different initial consonant, different tone marker; e.g., ศ่อง/sᴐ:ŋ1/), and (e) same tone homophony, but with a different initial consonant and different tone marker (e.g., ธ่อง/t(h)ᴐ:ŋ2/). Results of the critical comparisons revealed that segmental information (i.e., consonantal information) appears to be more important than tone information (i.e., the tone marker) in the early stages of visual-word processing in alphabetic, tonal languages like Thai. Thus, these findings may help constrain models of visual-word recognition and reading in tonal languages.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions, in the vicinity of the putative visual word form area, around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Siéroff, Eric; Slama, Yael
Word processing in left (LVF) and right (RVF) visual fields may be affected by left hemisphere activation during reading and by script direction. We evaluated the effect of script direction by presenting words in left-to-right (French) and right-to-left (Hebrew) scripts to bilingual French participants. Words of different lengths were presented in the LVF and the RVF in a naming task. Results showed (1) a stronger word length effect in the LVF than in the RVF in French, and no difference of word length effect between LVF and RVF in Hebrew; (2) a first-letter advantage only in the LVF in French and in the RVF in Hebrew, showing an effect of script direction on letter processing; and (3) a stronger advantage of external over internal letters in words presented in the LVF than in the RVF for both languages, showing a left hemisphere influence on letter activation. Thus, script direction and left hemisphere activation may affect different processes when reading words in LVF and RVF. Selective attention may orient and redistribute a processing "window" over the letter string according to script direction, and the modulation of attentional resources is influenced by left hemisphere activation.
Holloway, Steven R.
This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper, using the method of limits with a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that presented Landolt C targets randomly in four cardinal orientations, at 3 radial distances from a focus point, at eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggest that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.
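The correlation reported above is a standard Pearson product-moment statistic. As a minimal illustration of the computation involved, here is a self-contained sketch over hypothetical paired scores (the numbers are invented for illustration and are not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance (unnormalized) and the two standard-deviation terms.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant values: flicker fusion threshold (Hz)
# paired with a word-decoding score.
thresholds = [32, 35, 38, 40, 44, 47]
decoding = [21, 24, 23, 28, 30, 33]
r = pearson_r(thresholds, decoding)
```

With these toy numbers the two measures rise together, so `r` comes out strongly positive, which is the pattern the abstract describes.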
van Schie, Hein T.; Wijers, Albertus A.; Mars, Rogier B.; Benjamins, Jeroen S.; Stowe, Laurie A.
Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that
Previous bilingual studies showed reduced hemispheric asymmetry in visual tasks such as face perception in bilinguals compared with monolinguals, suggesting that experience in reading one or two languages could be a modulating factor. Here we examined whether differences in hemispheric asymmetry in visual tasks can also be observed in bilinguals who have different language backgrounds. We compared the behavior of three language groups in a tachistoscopic English word sequential matching task: English monolinguals (alphabetic monolinguals, A-Ms), bilinguals with an alphabetic L1 and English L2 (alphabetic-alphabetic bilinguals, AA-Bs), and bilinguals with Chinese L1 and English L2 (logographic-alphabetic bilinguals, LA-Bs). The results showed that AA-Bs had a stronger right visual field / left hemisphere (LH) advantage than A-Ms and LA-Bs, suggesting that different language learning experiences can influence how visual words are processed in the brain. In addition, we showed that this effect could be accounted for by a computational model that implements a theory of hemispheric asymmetry in perception (the Double Filtering by Frequency theory; Ivry & Robertson, 1998); the modeling data suggested that this difference may be due to both the difference in participants' vocabulary size and the difference in word-to-sound mapping between alphabetic and logographic languages.
Schindler, Sebastian; Wegrzyn, Martin; Steppacher, Inga; Kissler, Johanna
The personal significance of a language statement depends on its communicative context. However, this is rarely taken into account in neuroscience studies. Here, we investigate how the implied source of single word statements alters their cortical processing. Participants' brain event-related potentials were recorded in response to identical word streams consisting of positive, negative, and neutral trait adjectives stated to either represent personal trait feedback from a human or to be randomly generated by a computer. Results showed a strong impact of perceived sender. Regardless of content, the notion of receiving feedback from a human enhanced all components, starting with the P2 and encompassing early posterior negativity (EPN), P3, and the late positive potential (LPP). Moreover, negative feedback by the "human sender" elicited a larger EPN, whereas positive feedback generally induced a larger LPP. Source estimations revealed differences between "senders" in visual areas, particularly the bilateral fusiform gyri. Likewise, emotional content enhanced activity in these areas. These results specify how even implied sender identity changes the processing of single words in seemingly realistic communicative settings, amplifying their processing in the visual brain. This suggests that the concept of motivated attention extends from stimulus significance to simultaneous appraisal of contextual relevance. Finally, consistent with distinct stages of emotional processing, at least in contexts perceived as social, humans are initially alerted to negative content, but later process what is perceived as positive feedback more intensely.
Lamy, Dominique; Mudrik, Liad; Deouell, Leon Y
Whether information perceived without awareness can affect overt performance, and whether such effects can cross sensory modalities, remains a matter of debate. Whereas influence of unconscious visual information on auditory perception has been documented, the reverse influence has not been reported. In addition, previous reports of unconscious cross-modal priming relied on procedures in which contamination of conscious processes could not be ruled out. We present the first report of unconscious cross-modal priming when the unaware prime is auditory and the test stimulus is visual. We used the process-dissociation procedure [Debner, J. A., & Jacoby, L. L. (1994). Unconscious perception: Attention, awareness and control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 304-317] which allowed us to assess the separate contributions of conscious and unconscious perception of a degraded prime (either seen or heard) to performance on a visual fragment-completion task. Unconscious cross-modal priming (auditory prime, visual fragment) was significant and of a magnitude similar to that of unconscious within-modality priming (visual prime, visual fragment). We conclude that cross-modal integration, at least between visual and auditory information, is more symmetrical than previously shown, and does not require conscious mediation.
Get up to speed on the newest version of Word with visual instruction. Microsoft Word is the standard for word processing programs, and the newest version offers additional functionality you'll want to use. Get up to speed quickly and easily with the step-by-step instructions and full-color screen shots in this popular guide! You'll see how to perform dozens of tasks, including how to set up and format documents and text; work with diagrams, charts, and pictures; use Mail Merge; post documents online; and much more. Easy-to-follow, two-page lessons make learning a snap.
Zhang, Dandan; He, Weiqi; Wang, Ting; Luo, Wenbo; Zhu, Xiangru; Gu, Ruolei; Li, Hong; Luo, Yue-Jia
Rapid responses to emotional words play a crucial role in social communication. This study employed event-related potentials to examine the time course of neural dynamics involved in emotional word processing. Participants performed a dual-target task in which positive, negative, and neutral adjectives were rapidly presented. The early occipital P1 was found to be larger when elicited by negative words, indicating that the first stage of emotional word processing mainly differentiates between non-threatening and potentially threatening information. The N170 and the early posterior negativity were larger for positive and negative words, reflecting the emotional/non-emotional discrimination stage of word processing. The late positive component not only distinguished emotional words from neutral words, but also differentiated between positive and negative words. This represents the third stage of emotional word processing, emotion separation. The present results indicated that, similar to the three-stage model of facial expression processing, the neural processing of emotional words can also be divided into three stages. These findings prompt us to believe that the nature of emotion can be analyzed by the brain independent of stimulus type, and that the three-stage scheme may be a common model for emotional information processing in the context of limited attentional resources.
Carreiras, Manuel; Vergara, Marta; Barber, Horacio
A number of behavioral studies have suggested that syllables might play an important role in visual word recognition in some languages. We report two event-related potential (ERP) experiments using a new paradigm showing that syllabic units modulate early ERP components. In Experiment 1, words and pseudowords were presented visually and colored so that there was a match or a mismatch between the syllable boundaries and the color boundaries. The results showed color-syllable congruency effects in the time window of the P200. Lexicality modulated the N400 amplitude, but no effects of this variable were obtained at the P200 window. In Experiment 2, high- and low-frequency words and pseudowords were presented in the congruent and incongruent conditions. The results again showed congruency effects at the P200 for low-frequency words and pseudowords, but not for high-frequency words. Lexicality and lexical frequency effects showed up at the N400 component. The results suggest a dissociation between syllabic and lexical effects with important consequences for models of visual word recognition.
Amenta, Simona; Crepaldi, Davide
The last 40 years have witnessed a growing interest in the mechanisms underlying the visual identification of complex words. A large amount of experimental data has been amassed, but although a growing number of studies are proposing explicit theoretical models for their data, no comprehensive theory has gained substantial agreement among scholars in the field. We believe that this is due, at least in part, to the presence of several controversial pieces of evidence in the literature and, consequently, to the lack of a well-defined set of experimental facts that any theory should be able to explain. With this review, we aim to delineate the state of the art in the research on the visual identification of complex words. By reviewing the major empirical evidence from a number of different paradigms, such as lexical decision, word naming, and masked and unmasked priming, we were able to identify a series of effects that we judge as reliable or that were consistently replicated in different experiments, along with some more controversial data, which we have tried to resolve and explain. We concentrated on behavioral and electrophysiological studies of inflected, derived, and compound words, so as to span all types of complex words. The outcome of this work is an analytical summary of well-established facts on the most relevant morphological issues, such as regularity, morpheme position coding, family size, semantic transparency, morpheme frequency, suffix allomorphy and productivity, morphological entropy, and morpho-orthographic parsing. In discussing this set of benchmark effects, we have drawn some methodological considerations on why contrasting evidence might have emerged, and have tried to delineate a target list for the construction of a new all-inclusive model of the visual identification of morphologically complex words. PMID:22807919
Liu, Chao; Zhang, Wu-Tian; Tang, Yi-Yuan; Mai, Xiao-Qin; Chen, Hsuan-Chih; Tardif, Twila; Luo, Yue-Jia
A notable controversy in neurolinguistics is whether there is a particular brain area specialized for visual word recognition within the visual ventral stream. We investigated this question via implicit processing of Chinese characters. Implicit processing of four types of stimuli (real characters, pseudo characters, artificial characters, and a checkerboard), each presented in two different sizes, was compared in 14 normal participants using functional MRI (fMRI) with a size judgment task. The results showed that when the three character types were contrasted to one another, there was significantly greater activation in the left middle fusiform gyrus during real and pseudo character processing compared to artificial characters. Moreover, individual analysis revealed that the coordinates were consistent with the Visual Word Form Area (VWFA) reported for alphabetic scripts. Results also showed consistent activation in the left middle frontal gyrus (BA 9) for real and pseudo characters. The relation between this region and the VWFA in character processing still needs further investigation.
Kievit-Kylar, Brent; Jones, Michael N
Although many recent advances have taken place in corpus-based tools, the techniques used to guide exploration and evaluation of these systems have advanced little. Typically, the plausibility of a semantic space is explored by sampling the nearest neighbors to a target word and evaluating the neighborhood on the basis of the modeler's intuition. Tools for visualization of these large-scale similarity spaces are nearly nonexistent. We present a new open-source tool to plot and visualize semantic spaces, thereby allowing researchers to rapidly explore patterns in visual data that describe the statistical relations between words. Words are visualized as nodes, and word similarities are shown as directed edges of varying strengths. The "Word-2-Word" visualization environment allows for easy manipulation of graph data to test word similarity measures on their own or in comparisons between multiple similarity metrics. The system contains a large library of statistical relationship models, along with an interface to teach them from various language sources. The modularity of the visualization environment allows for quick insertion of new similarity measures so as to compare new corpus-based metrics against the current state of the art. The software is available at www.indiana.edu/~semantic/word2word/.
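The Word-2-Word abstract above describes words visualized as nodes with similarity-weighted directed edges. A minimal sketch of that underlying data structure, using cosine similarity over toy co-occurrence vectors (the words, counts, and threshold here are hypothetical illustrations, not taken from the actual tool):

```python
import math

# Toy co-occurrence count vectors standing in for a corpus-derived
# semantic space (rows are words, columns are arbitrary contexts).
vectors = {
    "cat": [4, 3, 0, 1],
    "dog": [3, 4, 0, 1],
    "car": [0, 1, 5, 3],
}

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def similarity_graph(vecs, threshold=0.5):
    """Directed edges (w1, w2) -> similarity, kept only above threshold."""
    edges = {}
    for w1, u in vecs.items():
        for w2, v in vecs.items():
            if w1 != w2:
                s = cosine(u, v)
                if s > threshold:
                    edges[(w1, w2)] = round(s, 3)
    return edges

graph = similarity_graph(vectors)
```

Here "cat" and "dog" share most of their contexts and end up linked by strong edges in both directions, while "car" falls below the threshold; swapping in a different similarity metric would only require replacing `cosine`, which mirrors the modularity the abstract attributes to the tool.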
Cavina-Pratesi, Cristiana; Large, Mary-Ellen; Milner, A David
Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to "guess" remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area (FFA). We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F.
Roya Ranjbar Mohammadi
Studies on visual word recognition have resulted in different and sometimes contradictory proposals, such as the Multi-Trace Memory Model (MTM), the Dual-Route Cascaded Model (DRC), and the Parallel Distributed Processing Model (PDP). The role of the number of syllables in word recognition was examined using five groups of English words and non-words. Participants' reaction times to these words were measured using reaction-time measuring software. The results indicated that there was a syllabic effect on the recognition of both high- and low-frequency words. The pattern was incremental in terms of syllable number. This pattern prevailed in high- and low-frequency words and non-words, except in one-syllable words. In general, the results are in line with the PDP model, which claims that a single processing mechanism is used in the recognition of both words and non-words. In other words, the findings suggest that lexical items are mainly processed via a lexical route. A pedagogical implication of the findings would be that reading in English as a foreign language involves analytical processing of the syllables of words.
Appelbaum, Lawrence Gregory
The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading abilities. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple perceptual targets (dot strings) embedded amongst a stream of character strings. Beginning at 130 ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early-latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180 ms. This right-sided effect was followed by bilateral positive occipital activity for false fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly thereafter. Additional early (130-150 ms) negative-polarity activity over left occipital cortex and longer-latency, centrally distributed responses (>300 ms) were present, likely reflecting implicit activation of the previously reported 'visual word form' area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information, and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.
Rubino, Cristina; Corrow, Sherryse L; Corrow, Jeffrey C; Duchaine, Brad; Barton, Jason J S
The "many-to-many" hypothesis proposes that visual object processing is supported by distributed circuits that overlap for different object categories. For faces and words the hypothesis posits that both posterior fusiform regions contribute to both face and visual word perception and predicts that unilateral lesions impairing one will affect the other. However, studies testing this hypothesis have produced mixed results. We evaluated visual word processing in subjects with developmental prosopagnosia, a condition linked to right posterior fusiform abnormalities. Ten developmental prosopagnosic subjects performed a word-length effect task and a task evaluating the recognition of word content across variations in text style, and the recognition of style across variations in word content. All subjects had normal word-length effects. One had prolonged sorting time for word recognition in handwritten stimuli. These results suggest that the deficit in developmental prosopagnosia is unlikely to affect visual word processing, contrary to predictions of the many-to-many hypothesis.
Ma, Bosen; Wang, Xiaoyun; Li, Degao
To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.
Mishra, Ramesh Kumar; Singh, Niharika
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
Callens, Maaike; Whitney, Carol; Tops, Wim; Brysbaert, Marc
Whitney and Cornelissen hypothesized that dyslexia may be the result of problems with the left-to-right processing of words, particularly in the part of the word between the word beginning and the reader's fixation position. To test this hypothesis, we tachistoscopically presented consonant trigrams
Qiao, Fuqiang; Zheng, Li; Li, Lin; Zhu, Lei; Wang, Qianfeng
Reduced neural activation has been consistently observed during the processing of repeated items, a phenomenon termed repetition suppression. The present study used functional magnetic resonance imaging (fMRI) to investigate whether and how emotional valence affects repetition suppression, adopting Chinese personality-trait words as materials. Seventeen participants were required to read negative and neutral Chinese personality-trait words silently. They were then presented with repeated and novel items during scanning. Results showed significant repetition suppression in the inferior occipital gyrus only for neutral personality-trait words, whereas similar repetition suppression in the left inferior temporal gyrus and left middle temporal gyrus was revealed for both word types. These results indicate common and distinct neural substrates during the processing of repeated negative and neutral Chinese personality-trait words.
Pozzan, Lucia; Trueswell, John C
We asked whether children's well-known difficulties revising initial sentence processing commitments characterize the immature or the learning parser. Adult L2 speakers of English acted out temporarily ambiguous and unambiguous instructions. While online processing patterns indicate that L2 adults experienced garden-paths and were sensitive to referential information to a similar degree as native adults, their act-out patterns indicate increased difficulties revising initial interpretations, at rates similar to those observed for 5-year-old native children (e.g., Trueswell, Sekerina, Hill & Logrip, 1999). We propose that L2 learners' difficulties with revision stem from increased recruitment of cognitive control networks during processing of a not fully proficient language, resulting in the reduced availability of cognitive control for parsing revisions.
Get your blog up and running with the latest version of WordPress WordPress is one of the most popular, easy-to-use blogging platforms and allows you to create a dynamic and engaging blog, even if you have no programming skills or experience. Ideal for the visual learner, Teach Yourself VISUALLY WordPress, Second Edition introduces you to the exciting possibilities of the newest version of WordPress and helps you get started, step by step, with creating and setting up a WordPress site. Author and experienced WordPress user Janet Majure shares advice, insight, and best practices for taking full
Luce, P A; Lyons, E A
A large number of multisyllabic words contain syllables that are themselves words. Previous research using cross-modal priming and word-spotting tasks suggests that embedded words may be activated when the carrier word is heard. To determine the effects of an embedded word on processing of the larger word, processing times for matched pairs of bisyllabic words were examined to contrast the effects of the presence or absence of embedded words in both 1st- and 2nd-syllable positions. Results from auditory lexical decision and single-word shadowing demonstrate that the presence of an embedded word in the 1st-syllable position speeds processing times for the carrier word. The presence of an embedded word in the 2nd syllable has no demonstrable effect.
Holmes, V. M.
Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…
Hills, Charlotte S; Pancaroglu, Raika; Duchaine, Brad; Barton, Jason J S
A novel hypothesis of object recognition asserts that multiple regions are engaged in processing an object type, and that cerebral regions participate in processing multiple types of objects. In particular, for high-level expert processing, it proposes shared rather than dedicated resources for word and face perception, and predicts that prosopagnosic subjects would have minor deficits in visual word processing, and alexic subjects would have subtle impairments in face perception. In this study, we evaluated whether prosopagnosic subjects had deficits in processing either the word content or the style of visual text. Eleven prosopagnosic subjects, 6 with unilateral right lesions and 5 with bilateral lesions, participated. In the first study, we evaluated their word length effect in reading single words. In the second study, we assessed their time and accuracy for sorting text by word content independent of style, and for sorting text by handwriting or font style independent of word content. Only subjects with bilateral lesions showed mildly elevated word length effects. Subjects were not slowed in sorting text by word content, but were nearly uniformly impaired in accuracy for sorting text by style. Our results show that prosopagnosic subjects are impaired not only in face recognition but also in perceiving stylistic aspects of text. This supports a modified version of the many-to-many hypothesis that incorporates hemispheric specialization for processing different aspects of visual text. © 2015 American Neurological Association.
Becker, Curtis A.
Schuberth and Eimas (EJ 159 939) reported that context and frequency effects added to determine reaction times in a lexical decision (word v nonword) task. The present reexamination shows that context and frequency do interact, with semantic context facilitating the processing of low-frequency words more than high-frequency words. (Author/CP)
Borowsky, Ron; Besner, Derek
D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…
Cao, Fan; Rickles, Ben; Vu, Marianne; Zhu, Ziheng; Chan, Derek Ho Lung; Harris, Lindsay N; Stafura, Joseph; Xu, Yi; Perfetti, Charles A
Adult learners of Chinese learned new characters through writing, visual chunking or reading-only. Following training, ERPs were recorded during character recognition tasks, first shortly after the training and then three months later. We hypothesized that the character training effects would be seen in ERP components associated with word recognition and episodic memory. Results confirmed a larger N170 for visual chunking training than other training and a larger P600 for learned characters than novel characters. Another result was a training effect on the amplitude of the P100, which was greater following writing training than other training, suggesting that writing training temporarily led to increased visual attention to the orthographic forms. Furthermore, P100 amplitude at the first post-test was positively correlated with character recall 3 months later. Thus, the marker of early visual attention (P100) was predictive of retention of orthographic knowledge acquired in training.
Visual crowding, the inability to see an object when it is surrounded by flankers in the periphery, does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective) combination integration, the simplest kind of temporal semantic integration, did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.
Ferrand, Ludovic; New, Boris
Two experiments investigated the role of the number of syllables in visual word recognition and naming. Experiment 1 (word and nonword naming) showed that effects of number of syllables on naming latencies were observed for nonwords and very low-frequency words but not for high-frequency words. In Experiment 2 (lexical decision), syllabic length effects were also obtained for very low-frequency words but not for high-frequency words and nonwords. These results suggest that visual word recognition and naming do require syllabic decomposition, at least for very low-frequency words in French. These data are compatible with the multiple-trace memory model for polysyllabic word reading [Psychol. Rev. 105 (1998) 678]. In this model, reading depends on the activity of two procedures: (1) a global procedure that operates in parallel across a letter string (and does not generate a strong syllabic length effect) and that is the predominant process in generating responses to high-frequency words, and (2) an analytic procedure that operates serially across a letter string (and generates a strong syllabic length effect) and that is the predominant process in generating responses to very low-frequency words. A modified version of the dual route cascaded model [Psychol. Rev. 108 (1) (2001) 204] can also explain the present results, provided that syllabic units are included in this model. However, the Parallel Distributed Processing model [Psychol. Rev. 96 (1989) 523; J. Exp. Psychol.: Human Perception Perform. 16 (1990) 92] has difficulty accounting for these results.
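The division of labor in the multiple-trace memory model described above can be caricatured in a few lines of Python. This is a deliberately simplified sketch: all constants and the hard frequency threshold are invented for illustration, whereas the actual model lets the two procedures contribute jointly rather than switching discretely.

```python
def predicted_latency(word_frequency, n_syllables,
                      global_rt=550.0, analytic_base=600.0,
                      per_syllable=35.0, freq_threshold=1.0):
    """Toy version of the two reading procedures (latency in ms).

    High-frequency words are handled by the parallel global
    procedure, which carries no syllable-length cost; very
    low-frequency words fall back on the serial analytic
    procedure, whose cost grows with the number of syllables.
    Frequency is in occurrences per million (hypothetical units).
    """
    if word_frequency >= freq_threshold:
        return global_rt
    return analytic_base + per_syllable * n_syllables

# High-frequency words: flat latencies, i.e., no syllabic length effect.
hi_freq = [predicted_latency(100, s) for s in (1, 2, 3)]
# Very low-frequency words: latency rises with each added syllable.
lo_freq = [predicted_latency(0.1, s) for s in (1, 2, 3)]
```

The key qualitative prediction, a length effect confined to low-frequency words, falls out of which procedure dominates, not from the particular numbers chosen here.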
Take your WordPress skills to the next level with these tips, tricks, and tasks Congratulations on getting your blog up and running with WordPress! Now are you ready to take it to the next level? Teach Yourself VISUALLY Complete WordPress takes you beyond the blogging basics with expanded tips, tricks, and techniques with clear, step-by-step instructions accompanied by screen shots. This visual book shows you how to incorporate forums, use RSS, obtain and review analytics, work with tools like Google AdSense, and much more. Shows you how to use mobile tools to edit a
Sereno, Sara C.; Scott, Graham G.; Yao, Bo; Thaden, Elske J.; O'Donnell, Patrick J.
Visual emotion word processing has been in the focus of recent psycholinguistic research. In general, emotion words provoke differential responses in comparison to neutral words. However, words are typically processed within a context rather than in isolation. For instance, how does one's inner emotional state influence the comprehension of emotion words? To address this question, the current study examined lexical decision responses to emotionally positive, negative, and neutral words as a f...
Mei, Leilei; Xue, Gui; Chen, Chuansheng; Xue, Feng; Zhang, Mingxia; Dong, Qi
Previous studies have identified the critical role of the left fusiform cortex in visual word form processing, learning, and memory. However, this so-called visual word form area's (VWFA) other functions are not clear. In this study, we used fMRI and the subsequent memory paradigm to examine whether the putative VWFA was involved in the processing and successful memory encoding of faces as well as words. Twenty-two native Chinese speakers were recruited to memorize the visual forms of faces and Chinese words. Episodic memory for the studied material was tested 3h after the scan with a recognition test. The fusiform face area (FFA) and the VWFA were functionally defined using separate localizer tasks. We found that, both within and across subjects, stronger activity in the VWFA was associated with better recognition memory of both words and faces. Furthermore, activation in the VWFA did not differ significantly during the encoding of faces and words. Our results revealed the important role of the so-called VWFA in face processing and memory and supported the view that the left mid-fusiform cortex plays a general role in the successful processing and memory of different types of visual objects (i.e., not limited to visual word forms). Copyright 2010 Elsevier Inc. All rights reserved.
Kinoshita, Sachiko; Norris, Dennis
A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316
Developing readers have been shown to rely on morphemes in visual word recognition across several naming, lexical decision and priming experiments. However, the impact of morphology in reading is not consistent across studies, with differing results emerging not only between but also within writing systems. Here, we report a cross-language experiment involving the English and French languages, which aims to compare directly the impact of morphology on word recognition in the two languages. Monolingual French-speaking and English-speaking children matched for grade level (Part 1) and for age (Part 2) participated in the study. Two lexical decision tasks (one in French, one in English) featured words and pseudowords with exactly the same structure in each language. The presence of a root (R+) and a suffix ending (S+) was manipulated orthogonally, leading to four possible combinations in words (R+S+: e.g. postal; R+S-: e.g. turnip; R-S+: e.g. rascal; and R-S-: e.g. bishop) and in pseudowords (R+S+: e.g. pondal; R+S-: e.g. curlip; R-S+: e.g. vosnal; and R-S-: e.g. hethop). Results indicate that the presence of morphemes facilitates children's recognition of words and impedes their ability to reject pseudowords in both languages. Nevertheless, effects extend across accuracy and latencies in French but are restricted to accuracy in English, suggesting a higher degree of morphological processing efficiency in French. We argue that the inconsistencies found between languages emphasise the need for developmental models of word recognition to integrate a morpheme level whose elaboration is tuned by the productivity and transparency of the derivational system.
People with dyslexia have difficulty learning to read and many lack fluent word recognition as adults. In a novel task that borrows elements of the 'word superiority' and 'word inversion' paradigms, we investigate whether holistic word recognition is impaired in dyslexia. In Experiment 1 students with dyslexia and controls judged the similarity of pairs of 6- and 7-letter words or pairs of words whose letters had been partially jumbled. The stimuli were presented in both upright and inverted form with orthographic regularity and orientation randomized from trial to trial. While both groups showed sensitivity to orthographic regularity, both word inversion and letter jumbling were more detrimental to skilled than dyslexic readers, supporting the idea that the latter may read in a more analytic fashion. Experiment 2 employed the same task but using shorter, 4- and 5-letter words and a design where orthographic regularity and stimulus orientation were held constant within experimental blocks to encourage the use of either holistic or analytic processing. While there was no difference in reaction time between the dyslexic and control groups for inverted stimuli, the students with dyslexia were significantly slower than controls for upright stimuli. These findings suggest that holistic word recognition, which is largely based on the detection of orthographic regularity, is impaired in dyslexia.
Vergara-Martínez, Marta; Perea, Manuel; Marín, Alejandro; Carreiras, Manuel
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in a lexical decision task. The stimuli were displayed under different conditions in a masked priming paradigm with a 50-ms SOA: (i) identity/baseline condition (e.g., chocolate-CHOCOLATE); (ii) vowels-delayed condition (e.g., choc_l_te-CHOCOLATE); (iii) consonants-delayed condition (cho_o_ate-CHOCOLATE); (iv) consonants-transposed condition (cholocate-CHOCOLATE); (v) vowels-transposed condition (chocalote-CHOCOLATE); and (vi) unrelated condition (editorial-CHOCOLATE). Results showed earlier ERP effects and longer reaction times for the delayed-letter compared to the transposed-letter conditions. Furthermore, at early stages of processing, consonants may play a greater role during letter identity processing. Differences between vowels and consonants regarding letter position assignment are discussed in terms of a later phonological level involved in lexical retrieval. Copyright © 2010 Elsevier Inc. All rights reserved.
Miller, Paul; Kupfermann, Amirit
The aim of the study was to elucidate the nature and efficiency of the strategies that readers with phonological dyslexia use for temporary retention of written words in Working Memory (WM). Data was gathered through a paradigm whereby participants had to identify serially presented written (target) words from within larger word pools according to their presentation order, with word pools containing code-specific distracter (CSD) words and non-code-specific distracter (NCSD) words. Analyses focused on three aspects of performance: (1) false recognition of target words; (2) correct recognition of target words; and (3) retention of word presentation order. Participants were readers with diagnosed phonological dyslexia (n = 20, mean grade level = 9.05 [0.89]) and a control group of regular readers (n = 25, mean grade level = 9.00 [0.76]). Results provide direct evidence that the dyslexic readers and the regular readers used essentially different memory coding strategies for the temporary retention of written words, with the former predominantly relying on a visual strategy and the latter on a phonological strategy. Findings further pinpointed a notably impoverished ability of the dyslexic readers to retain word presentation order. The implication of these findings is discussed in relation to theories predicting the acquisition and mastery of reading.
Orenes, Isabel; Santamaría, Carlos
Many studies have shown the advantage of processing visualizable words over non-visualizable words due to the associated image code. The present paper reports the case of negation, in which imagery could slow down processing. Negation reverses the truth value of a proposition from false to true or vice versa. Consequently, negation works only on propositions (reversing their truth value) and cannot apply directly to other forms of knowledge representation such as images (although they can be veridical or not). This leads to a paradoxical hypothesis: despite the advantage of visualizable words for general processing, the negation of clauses containing words related to the representation of an image would be more difficult than negation containing non-visualizable words. Two experiments support this hypothesis by showing that sentences with a previously negated visualizable word took longer to be read than sentences with previously negated non-visualizable words. The results suggest that a verbal code is used to process negation. Copyright © 2014 Elsevier B.V. All rights reserved.
Solomyak, Olla; Marantz, Alec
We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which does not appear elsewhere). Analysis was focused on brain responses within 100-200 msec poststimulus onset in the previously identified letter string and visual word-form areas. MEG data were analyzed using cortically constrained minimum-norm estimation. Correlations were computed between activity at functionally defined ROIs and continuous measures of the words' morphological properties. ROIs were identified across subjects on a reference brain and then morphed back onto each individual subject's brain (n = 9). We find evidence of decomposition for both free stems and bound roots at the M170 stage in processing. The M170 response is shown to be sensitive to morphological properties such as affix frequency and the conditional probability of encountering each word given its stem. These morphological properties are contrasted with orthographic form features (letter string frequency, transition probability from one string to the next), which exert effects on earlier stages in processing (approximately 130 msec). We find that effects of decomposition at the M170 can, in fact, be attributed to morphological properties of complex words, rather than to purely orthographic and form-related properties. Our data support a model of word recognition in which decomposition is attempted, and possibly utilized, for complex words containing bound roots as well as free word-stems.
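The single-trial correlational technique at the heart of this study reduces, in its simplest form, to correlating per-trial ROI amplitudes with a continuous word property across trials. A minimal sketch on synthetic data (the effect size, noise level, and variable names are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 120

# One continuous morphological property per trial (e.g., a
# hypothetical standardized affix-frequency measure) ...
affix_frequency = rng.normal(size=n_trials)
# ... and one simulated single-trial ROI amplitude that partly
# tracks it, plus trial-to-trial noise.
roi_amplitude = 0.5 * affix_frequency + rng.normal(size=n_trials)

# The analysis step: Pearson correlation across trials.
r = np.corrcoef(affix_frequency, roi_amplitude)[0, 1]
```

The actual study adds source localization and across-subject ROI morphing on top of this per-trial correlation, but the statistical core is the same.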
Although it is assumed that semantics is a critical component of visual word recognition, there is still much that we do not understand. One recent way of studying semantic processing has been in terms of semantic neighbourhood (SN) density, and this research has shown that semantic neighbours facilitate lexical decisions. However, it is not clear…
This study investigates word-learning using a new model that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, and (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-syllable utterance containing a one-syllable target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random one-syllable words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the intentional value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
Eskenazi, Michael A; Folk, Jocelyn R
We investigated whether high-skill readers skip more words than low-skill readers as a result of parafoveal processing differences based on reading skill. We manipulated foveal load and word length, two variables that strongly influence word skipping, and measured reading skill using the Nelson-Denny Reading Test. We found that reading skill did not influence the probability of skipping five-letter words, but low-skill readers were less likely to skip three-letter words when foveal load was high. Thus, reading skill is likely to influence word skipping when the amount of information in the parafovea falls within the word identification span. We interpret the data in the context of visual-based (extended optimal viewing position model) and linguistic-based (E-Z Reader model) accounts of word skipping. The models make different predictions about how and why a word is skipped; however, the data indicate that both models should take into account the fact that different factors influence skipping rates for high- and low-skill readers. (c) 2015 APA, all rights reserved.
Boettcher, Sage E P; Wolfe, Jeremy M
In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order
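The two set-size functions reported here (RTs linear in the visual set size, logarithmic in the memory set size) can be written down as a toy descriptive model. The functional form follows the qualitative pattern in the abstract; the intercept and slope values are invented:

```python
import math

def hybrid_search_rt(visual_set_size, memory_set_size,
                     base=500.0, per_comparison=40.0):
    """Descriptive hybrid-search RT (ms): each displayed item costs a
    comparison whose duration grows with the log of the memory set."""
    return base + per_comparison * visual_set_size * math.log2(memory_set_size + 1)

# Linear in the number of words on screen (memory set fixed at 16) ...
rts_visual = [hybrid_search_rt(v, 16) for v in (2, 4, 8, 16)]
# ... but only logarithmic in the length of the memorized word list.
rts_memory = [hybrid_search_rt(8, m) for m in (2, 4, 8, 16)]
```

Under this form, doubling the display size doubles the search cost, while doubling the memorized list adds only a small, shrinking increment per item, which is the signature result of the hybrid-search studies.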
Are you a visual learner? Do you prefer instructions that show you how to do something -- and skip the long-winded explanations? If so, then this book is for you. Open it up and you'll find clear, step-by-step screen shots that show you how to tackle more than 125 Word 2003 tasks. Each task-based spread includes these great features to get you up and running on Word 2003 in no time:* Helpful sidebars that offer practical tips and tricks* Succinct explanations that walk you through step by step* Full-color screen shots that demonstrate each task* Two-page lessons that break big topics into bite
Seim, Sandra K.; Stoneking, Cheryl A.
In February 1980, Rush-Presbyterian-St. Luke's Medical Center in Chicago appointed a task force to study word processing/office automation and to make recommendations for acquisition, implementation, and administration. The group's working approach, findings, and conclusions are discussed. (Author/MLW)
Starrfelt, Randi; Habekost, Thomas; Gerlach, Christian
Whether pure alexia is a selective disorder that affects reading only, or if it reflects a more general visual disturbance, is highly debated. We have investigated the selectivity of visual deficits in a pure alexic patient (NN) using a combination of psychophysical measures, mathematical modelling […] affected. His visual apprehension span was markedly reduced for letters and digits. His reduced visual processing capacity was also evident when reporting letters from words. In an object decision task with fragmented pictures, NN's performance was abnormal. Thus, even in a pure alexic patient with intact […] recognition of line drawings, we find evidence of a general visual deficit not selective to letters or words. This finding is important because it raises the possibility that other pure alexics might have similar non-selective impairments when tested thoroughly. We argue that the general visual deficit in NN […]
Watcharapinchai, Nattachai; Aramvith, Supavadee; Siddhichai, Supakorn
An improvement in the method of automatic vehicle classification is investigated. The challenges are to correctly classify vehicles regardless of changes in illumination, differences in points of view of the camera, and variations in the types of vehicles. Our proposed appearance-based feature extraction algorithm is called linked visual words (LVWs) and is based on the existing bag-of-visual-words (BoVW) technique, with the addition of spatial information to improve accuracy of classification. In addition, to prevent over-fitting due to a large number of LVWs, four common sampling techniques with LVWs are investigated. Our results suggest that the sampling of LVWs using TF-IDF with grouping improved the accuracy of classification for the test dataset. In summary, the proposed system is able to classify nine types of vehicles and work with surveillance cameras in real-world scenarios. The classification accuracy of the proposed system is 5.58% and 4.27% higher on average for three datasets when compared with BoVW + SVM and Lenet-5, respectively.
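The TF-IDF sampling the authors mention builds on standard TF-IDF weighting of visual-word histograms. As a generic illustration (not the authors' LVW pipeline; the toy counts are invented), the weighting can be sketched as:

```python
import math
from collections import Counter

def tfidf_weight(histograms):
    """TF-IDF-weight per-image visual-word counts.

    `histograms` is a list of Counter({visual_word_id: count}),
    one per image."""
    n_images = len(histograms)
    # Document frequency: in how many images does each visual word occur?
    df = Counter()
    for h in histograms:
        df.update(set(h))
    weighted = []
    for h in histograms:
        total = sum(h.values())
        weighted.append({w: (c / total) * math.log(n_images / df[w])
                         for w, c in h.items()})
    return weighted

# Three toy images described by quantized-descriptor counts.
images = [Counter({0: 4, 1: 1}), Counter({0: 3, 2: 2}), Counter({0: 5})]
weights = tfidf_weight(images)
# Visual word 0 occurs in every image, so its IDF (hence weight) is 0;
# rarer visual words keep positive weight and dominate matching.
```

Down-weighting visual words that appear everywhere is what makes TF-IDF a natural sampling criterion when pruning a large LVW vocabulary.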
This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.
Studies are reviewed that demonstrate how the foveal area of the eye constrains how compound words are identified during reading. When compound words are short, their letters can be identified during a single fixation, leading to the whole-word route dominating word recognition from early on. Hence, visually marking morpheme boundaries by hyphens slows down processing by encouraging morphological decomposition when holistic processing is a feasible option. In contrast, the decomposition route dominates the early stages of identifying long compound words. Thus, visual marking of morpheme boundaries facilitates processing of long compound words, unless the initial fixation made on the word lands very close to the morpheme boundary. The reviewed pattern of results is explained by the visual acuity principle (Bertram & Hyönä, 2003) and the dual-route framework of morphological processing.
Maria Grazia Di Bono
It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Conversely, there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.
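The linear-decoding step used to probe the network (reading word identity out of a hidden layer's activity) can be sketched with a least-squares read-out on synthetic activations. This is not the study's network or data; the layer activity here is simulated as noisy word-specific patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_units, n_trials = 5, 32, 200

# Synthetic "hidden layer" activations: each word evokes a noisy
# word-specific pattern (already abstracted from retinal location).
prototypes = rng.normal(size=(n_words, n_units))
labels = rng.integers(0, n_words, size=n_trials)
activity = prototypes[labels] + 0.3 * rng.normal(size=(n_trials, n_units))

# One-hot targets; fit a linear decoder by least squares.
targets = np.eye(n_words)[labels]
W, *_ = np.linalg.lstsq(activity, targets, rcond=None)

# Decode: predicted word = argmax of the linear read-out.
predicted = (activity @ W).argmax(axis=1)
accuracy = (predicted == labels).mean()
```

When the representation is genuinely word-selective and location-invariant, as simulated here, a plain linear read-out suffices; applying the same probe layer by layer is what lets the authors localize where invariance emerges.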
Shillcock, Richard C.; McDonald, Scott; Hipwell, Peter; Lowe, Will
We review various dimensions along which words differ and which, sometimes as part of a word recognition model, have been claimed to predict performance in the visual lexical decision task. Models of word recognition have typically involved inadequate, or non-existent, semantic representations and have dealt with words existing in isolation from any context. We propose an alternative perspective in which it is the relationships between words - reflecting usage and meaning - rather than the di...
Kuchinke, Lars; Lux, Vanessa
A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.
Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597
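The two-stage weighting sketched in this abstract can be illustrated generically. The code below is a hypothetical reading, not the paper's actual PD-LST formulas: it builds a stand-in topic-word matrix and runs a PageRank-style power iteration over the topic-word bipartite graph to obtain an overall significance per visual word, then keeps the top-ranked words as a pruned dictionary.

```python
import numpy as np

# Hypothetical sketch of iterative topic-word ranking (not the paper's
# exact PD-LST method): words and topics reinforce each other through
# a bipartite graph until scores converge, then the dictionary is
# pruned to the most significant visual words.
rng = np.random.default_rng(1)
n_topics, n_words = 4, 12
# Stand-in for learnt topic-word weights (rows: topics, cols: words)
S = rng.random((n_topics, n_words))
S /= S.sum(axis=1, keepdims=True)             # per-topic distribution

word_score = np.full(n_words, 1.0 / n_words)
for _ in range(50):                           # power iteration to fixed point
    topic_score = S @ word_score              # topics supported by good words
    topic_score /= topic_score.sum()
    new = S.T @ topic_score                   # words supported by good topics
    new /= new.sum()
    if np.allclose(new, word_score, atol=1e-10):
        break
    word_score = new

keep = np.argsort(word_score)[::-1][: n_words // 2]   # pruned dictionary
print("retained visual words:", sorted(keep.tolist()))
```

Retrieval would then represent each image as a histogram over only the retained words, which is where the reported accuracy and efficiency gains come from.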
Beaumont, Lee R.
Examines the two worlds of word processing: a theoretical world found in textbooks and magazines, and a "real" world found in offices where some form of word processing has been introduced. Suggestions for business teachers are included. (CT)
The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors) (part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-Related Potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300-699 ms) and late (i.e., 700-1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a postero-anterior pathway sequence: occipital, parietal and temporal areas; conversely, matching visualization involved left-hemispheric activity following an antero-posterior pathway sequence: frontal, temporal, parietal and occipital areas. These results suggest that, similarly for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying…
Vervloed, Mathijs P. J.; Loijens, Nancy E. A.; Waller, Sarah E.
In the report presented here, the authors describe a pilot intervention study that was intended to teach children with visual impairments the meaning of far-away words, and that used their mothers as mediators. The aim was to teach both labels and deep word knowledge, which is the comprehension of the full meaning of words, illustrated through…
Roll, Mikael; Horne, Merle; Lindgren, Magnus
Results indicating that high stem tones realizing word accents activate a certain class of suffixes in online processing of Central Swedish are presented. This supports the view that high Swedish word accent tones are induced onto word stems by particular suffixes rather than being associated with words in the mental lexicon. Using event-related potentials, effects of mismatch between word accents and inflectional suffixes were compared with mismatches between stem and suffix in terms of declension class. Declensionally incorrect suffixes yielded an increase in the N400, indicating problems in lexical retrieval, as well as a P600 effect, showing reanalysis. Both declensionally correct and incorrect high tone-inducing (Accent 2) suffixes combined with a mismatching low tone (Accent 1) on the stems produced P600 effects, but did not increase the N400. Suffixes usually co-occurring with Accent 1 did not yield any effects in words realized with the nonmatching Accent 2, suggesting that Accent 1 is a default accent, lacking association with any particular suffix. High tones on Accent 2 words also produced an early anterior positivity, interpreted as a P200 effect reflecting preattentive processing of the tone.
Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the fronto-temporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the fronto-temporal language network and participates in high-level language processing. Congenitally blind (n=10) and sighted control (n=15), male and female participants each took part in two fMRI experiments: 1) word reading (Braille for blind and print for sighted participants), and 2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT: The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream…
Rey, Amandine E; Riou, Benoit; Vallet, Guillaume T; Versace, Rémy
How do we represent the meaning of words? The present study assesses whether access to conceptual knowledge requires the reenactment of the sensory components of a concept. The reenactment-that is, simulation-was tested in a word categorisation task using an innovative masking paradigm. We hypothesised that a meaningless reactivated visual mask should interfere with the simulation of the visual dimension of concrete words. This assumption was tested in a paradigm in which participants were not aware of the link between the visual mask and the words to be processed. In the first phase, participants created a tone-visual mask or tone-control stimulus association. In the test phase, they categorised words that were presented with 1 of the tones. Results showed that words were processed more slowly when they were presented with the reactivated mask. This interference effect was only correlated with and explained by the value of the visual perceptual strength of the words (i.e., our experience with the visual dimensions associated with concepts) and not with other characteristics. We interpret these findings in terms of word access, which may involve the simulation of sensory features associated with the concept, even if participants were not explicitly required to access visual properties.
When reading, proficient bilinguals seem to engage the same cognitive circuits regardless of the language in use. Yet, whether or not such ‘bilingual’ mechanisms would be lateralized in the same way in distinct – single or dual – language contexts is a question for debate. To fill this gap, we tested 18 highly proficient Polish (L1) – English (L2) childhood bilinguals whose task was to read aloud one of the two laterally presented action verbs, one stimulus per visual half field. While in the single-language blocks only L1 or L2 words were shown, in the subsequent mixed-language blocks words from both languages were concurrently displayed. All stimuli were presented for 217 ms followed by masks in which letters were replaced with hash marks. Since in non-simultaneous bilinguals the control of language, skilled actions (including reading), and representations of action concepts are typically left lateralized, the vast majority of our participants showed the expected, significant right visual field advantage for L1 and L2, both for accuracy and response times. The observed effects were nevertheless associated with substantial variability in the strength of the lateralization of the mechanisms involved. Moreover, although it could be predicted that participants' performance should be better in a single-language context, accuracy was significantly higher and response times were significantly shorter in a dual-language context, irrespective of the language tested. Finally, for both accuracy and response times, there were significant positive correlations between the laterality indices (LIs) of both languages independent of the context, with a significantly greater left-sided advantage for L1 vs. L2 in the mixed-language blocks, based on LIs calculated for response times. Thus, despite similar representations of the two languages in the bilingual brain, these results also point to the functional separation of L1 and L2 in the dual…
Rouibah, A; Taft, M
Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.
Kazanas, Stephanie A.; Altarriba, Jeanette
Previous studies comparing emotion and emotion-laden word processing have used various cognitive tasks, including an Affective Simon Task (Altarriba and Basnight-Brown in "Int J Biling" 15(3):310-328, 2011), lexical decision task (LDT; Kazanas and Altarriba in "Am J Psychol", in press), and rapid serial visual processing…
Starrfelt, Randi; Habekost, Thomas; Gerlach, Christian
Whether pure alexia is a selective disorder that affects reading only, or if it reflects a more general visual disturbance, is highly debated. We have investigated the selectivity of visual deficits in a pure alexic patient (NN) using a combination of psychophysical measures, mathematical modelling… affected. His visual apprehension span was markedly reduced for letters and digits. His reduced visual processing capacity was also evident when reporting letters from words. In an object decision task with fragmented pictures, NN's performance was abnormal. Thus, even in a pure alexic patient with intact… can be accounted for in terms of inefficient build-up of sensory representations, and that this low level deficit can explain the pattern of spared and impaired abilities in this patient.
Lázaro, Miguel; Sainz, Javier; Illera, Víctor
In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…
Johnson, E.K.; McQueen, J.M.; Hüttig, F.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched
Fraga González, G.; Žarić, G.; Tijms, J.; Bonte, M.; Blomert, L.; van der Molen, M.W.
The specialization of visual brain areas for fast processing of printed words plays an important role in the acquisition of reading skills. Dysregulation of these areas may be among the deficits underlying developmental dyslexia. The present study examines the specificity of word activation in
Rosa Kit Wan Kwok
We investigated word learning in university and college students with a diagnosis of dyslexia and in typically-reading controls. Participants read aloud short (4-letter) and longer (7-letter) nonwords as quickly as possible. The nonwords were repeated across 10 blocks, using a different random order in each block. Participants returned 7 days later and repeated the experiment. Accuracy was high in both groups. The dyslexics were substantially slower than the controls at reading the nonwords throughout the experiment. They also showed a larger length effect, indicating less effective decoding skills. Learning was demonstrated by faster reading of the nonwords across repeated presentations and by a reduction in the difference in reading speeds between shorter and longer nonwords. The dyslexics required more presentations of the nonwords before the length effect became non-significant, only showing convergence in reaction times between shorter and longer items in the second testing session, where controls achieved convergence part-way through the first session. Participants also completed a psychological test battery assessing reading and spelling, vocabulary, phonological awareness, working memory, nonverbal ability and motor speed. The dyslexics performed at a similar level to the controls on nonverbal ability but significantly less well on all the other measures. Regression analyses found that decoding ability, measured as the speed of reading aloud nonwords when they were presented for the first time, was predicted by a composite of word reading and spelling scores (‘literacy’). Word learning was assessed in terms of the improvement in naming speeds over 10 blocks of training. Learning was predicted by vocabulary and working memory scores, but not by literacy, phonological awareness, nonverbal ability or motor speed. The results show that young dyslexic adults have problems both in pronouncing novel words and in learning new written words.
Thomas, Ruth Fleming; Nagel, C. Van
A creative visualization approach to spelling and word recognition has been tried successfully with both adults and children. Unlike the traditional phonic approach to spelling, which is a left brain, analytical approach, the creative visualization approach uses the right brain. In addition, the approach eliminates the unpleasant associations with…
Eskenazi, Michael A.; Folk, Jocelyn R.
We investigated whether high-skill readers skip more words than low-skill readers as a result of parafoveal processing differences based on reading skill. We manipulated foveal load and word length, two variables that strongly influence word skipping, and measured reading skill using the Nelson-Denny Reading Test. We found that reading skill did…
Miller, Paul; Kupfermann, Amirit
The aim of the study was to elucidate the nature and efficiency of the strategies that readers with phonological dyslexia use for temporary retention of written words in Working Memory (WM). Data was gathered through a paradigm whereby participants had to identify serially presented written (target) words from within larger word pools according to…
Kember, H.; Choi, J.Y.; Cutler, A.; Barnes, J.; Brugos, A.; Shattuck-Hufnagel, S.; Veilleux, N.
In Korean, focus is expressed in accentual phrasing. To ascertain whether words focused in this manner enjoy a processing advantage analogous to that conferred by focus as expressed in, e.g., English and Dutch, we devised sentences with target words in one of four conditions: prosodic focus,
Francis, Wendy S; Camacho, Alejandra; Lara, Carolina
Previous research with words read in context at encoding showed little if any long-term repetition priming. In Experiment 1, 96 Spanish-English bilinguals translated words in isolation or in sentence contexts at encoding. At test, they translated words or named pictures corresponding to words produced at encoding and control words not previously presented. Repetition priming was reliable in all conditions, but priming effects were generally smaller for contextualized than for isolated words. Repetition priming in picture naming indicated priming from production in context. A componential analysis indicated priming from comprehension in context, but only in the less fluent language. Experiment 2 was a replication of Experiment 1 with auditory presentation of the words and sentences to be translated. Repetition priming was reliable in all conditions, but priming effects were again smaller for contextualized than for isolated words. Priming in picture naming indicated priming from production in context, but the componential analysis indicated no detectable priming for auditory comprehension. The results of the two experiments taken together suggest that repetition priming reflects the long-term learning that occurs with comprehension and production exposures to words in the context of natural language.
Temereanca, Simona; Hämäläinen, Matti S.; Kuperberg, Gina; Stufflebeam, Steve M.; Halgren, Eric; Brown, Emery N.
Active reading requires coordination between frequent eye-movements (saccades) and short fixations in text. Yet, the impact of saccades on word processing remains unknown, as neuroimaging studies typically employ constant eye fixation. Here we investigate eye-movement effects on word recognition processes in healthy human subjects using anatomically-constrained magnetoencephalography, psychophysical measurements, and saccade detection in real-time. Word recognition was slower and brain responses were reduced to words presented early vs. late after saccades, suggesting an overall transient impairment of word processing after eye-movements. Response reductions occurred early in visual cortices and later in language regions, where they co-localized with repetition priming effects. Qualitatively similar effects occurred when words appeared early vs. late after background-movement that mimicked saccades, suggesting that retinal motion contributes to postsaccadic inhibition. Further, differences in postsaccadic and background-movement effects suggest that central mechanisms also contribute to postsaccadic modulation. Together, these results suggest a complex interplay between visual and central saccadic mechanisms during reading. PMID:22457496
DuBois, M. E.
Office Automation: A Look Beyond Word Processing (unclassified report, Naval Postgraduate School, Monterey, CA). Topics covered include shared message-switching networks, communicating word processing equipment, teleconferencing, text and data handling, image services, and voice mail. The report also describes ink-jet printing, which makes use of electrically charged droplets of ink, whose sensitivity to the electric field towards which they are fired causes the image of the…
Sereno, Sara C; Scott, Graham G; Yao, Bo; Thaden, Elske J; O'Donnell, Patrick J
Visual emotion word processing has been in the focus of recent psycholinguistic research. In general, emotion words provoke differential responses in comparison to neutral words. However, words are typically processed within a context rather than in isolation. For instance, how does one's inner emotional state influence the comprehension of emotion words? To address this question, the current study examined lexical decision responses to emotionally positive, negative, and neutral words as a function of induced mood as well as their word frequency. Mood was manipulated by exposing participants to different types of music. Participants were randomly assigned to one of three conditions: no music, positive music, and negative music. Participants' moods were assessed during the experiment to confirm the mood induction manipulation. Reaction time results confirmed prior demonstrations of an interaction between a word's emotionality and its frequency. Results also showed a significant interaction between participant mood and word emotionality. However, the pattern of results was not consistent with mood-congruency effects. Although positive and negative mood facilitated responses overall in comparison to the control group, neither positive nor negative mood appeared to additionally facilitate responses to mood-congruent words. Instead, the pattern of findings seemed to be the consequence of attentional effects arising from induced mood. Positive mood broadens attention to a global level, eliminating the category distinction of positive-negative valence but leaving the high-low arousal dimension intact. In contrast, negative mood narrows attention to a local level, enhancing within-category distinctions, in particular, for negative words, resulting in less effective facilitation.
Vales, Catarina; Smith, Linda B.
Do words cue children’s visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target...
Cohen Kadosh, Roi; Henik, Avishai; Rubinsten, Orly
Besner and Coltheart [Besner, D., & Coltheart, M. (1979). Ideographic and alphabetic processing in skilled reading of English. Neuropsychologia, 17, 467-472] found a size congruity effect for Arabic numbers but not for number words. They proposed that Arabic numbers and number words are processed in different ways. However, in their study orientation of the stimuli and notation were confounded. In the present study, it is found that orientation of number words affects numerical processing. Orientation modulates both the size congruity effect and the distance effect; horizontal presentation produces similar results to those produced by Arabic numbers whereas vertical orientation produces different results. Accordingly, it is proposed that our cognitive system is endowed with two different mechanisms for numerical processing; one relies on a visual-spatial code and the other on a verbal code.
Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan
Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website ( https://sedufau.shinyapps.io/megalex/ ) and are searchable at www.lexique.org , inside the Open Lexique search engine.
Perea, Manuel; Panadero, Victoria
The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.
Wallentin, Mikkel; Gravholt, Claus Højbjerg; Skakkebæk, Anne
Competing theories attempt to explain the function of Broca's area in single word processing. Studies have found the region to be more active during processing of pseudowords than real words, during infrequent words relative to frequent words, and during Stroop (incongruent) color words compared to non-Stroop (congruent) words. Two related theories explain these findings as reflecting either "cognitive control" processing in the face of conflicting input or a linguistic prediction error signal, based on a predictive coding approach. The latter implies that processing cost refers to violations of expectations based on the statistical distributions of input. In this fMRI experiment we attempted to disentangle single word processing cost originating from cognitive conflict and that stemming from predictive expectation violation. Participants (N = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent vs. congruent colors). One of the colors, however, was presented three times as often as the other, making it possible to study both congruency and frequency effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study frequency effects across modalities. We found significant behavioral effects of both incongruency and frequency. A significant effect of incongruency was found in Broca's region, but no effect of frequency was observed, and no interaction. Conjoined effects of incongruency and frequency were found in parietal regions as well as in the Visual Word Form Area (VWFA). No interaction between perceptual modality and frequency was found in VWFA, suggesting that the region is not strictly visual. These findings speak against a strong version of the prediction error processing hypothesis in Broca's region. They support the idea that prediction error processes in the intermediate timeframe are allocated to more posterior parts of the brain.
Alieh Kord Zaferanlu Kambuziya
This research makes a comparison between phonological processes in complex and compound Persian words. Data are gathered from a 40,000-word Persian dictionary, from which 4,034 complex words and 1,464 compound words were chosen; the data were counted using Excel. Some results of the research are: 1- Insertion is the usual phonological process in complex words. More than half of the different insertions belong to the consonant /g/; /y/ and /ï¿/ are in second and third place, and the consonant /v/ has the smallest share. The largest share of vowel insertion belongs to /e/; the vowels /a/ and /o/ are in second and third place. Deletion in complex words is seen only for the consonant /t/ and the vowel /e/. 2- The most frequent phonological process in compounds is consonant deletion, affecting seven different consonants: /t/, /ï¿/, /m/, /r/, /Ç°/, /d/, and /c/. The only deleted vowel is /e/. In both complex and compound words, /t/ deletion can be observed. A sequence of three consonants paves the way for the deletion of one of them; if one of the consonants in the sequence is a sonorant such as /n/, deletion rarely happens. 3- In complex words, consonant deletion yields a lighter syllable weight, whereas vowel deletion yields a heavier syllable weight, so both processes lead to bi-moraic weight. 4- The production of bi-moraic syllables in Persian takes precedence over the Syllable Contact Law; language-specific rules thus take precedence over universals. 5- Vowel insertion is seen in both complex and compound words. In complex words, /e/ insertion plays the most fundamental part; the vowels /a/ and /o/ are in second and third place. Whenever there is a sequence of two ultra-heavy syllables, vowel insertion breaks the first syllable into two light syllables. The compounds that are influenced by vowel insertion can be and are pronounced without any
Saur, Dorothee; Baumgaertner, Annette; Moehring, Anja; Buchel, Christian; Bonnesen, Matthias; Rose, Michael; Musso, Mariachristina; Meisel, Jurgen M.
One of the issues debated in the field of bilingualism is the question of a "critical period" for second language acquisition. Recent studies suggest an influence of age of onset of acquisition (AOA) particularly on syntactic processing; however, the processing of word order in a sentence context has not yet been examined specifically. We used…
Venker, Courtney E
Deficits in visual disengagement are one of the earliest emerging differences in infants who are later diagnosed with autism spectrum disorder. Although researchers have speculated that deficits in visual disengagement could have negative effects on the development of children with autism spectrum disorder, we do not know which skills are disrupted or how this disruption takes place. As a first step in understanding this issue, this study investigated the relationship between visual disengagement and a critical skill in early language development: spoken word recognition. Participants were 18 children with autism spectrum disorder (aged 4-7 years). Consistent with our predictions, children with poorer visual disengagement were slower and less accurate to process familiar words; disengagement explained over half of the variance in spoken word recognition. Visual disengagement remained uniquely associated with spoken word recognition after accounting for children's vocabulary size and age. These findings align with a recently proposed developmental model in which poor visual disengagement decreases the speed and accuracy of real-time spoken word recognition in children with autism spectrum disorder-which, in turn, may negatively affect their language development.
Cognitive research has shown that the human brain processes images quicker than it processes words, and images are more likely than text to remain in long-term memory. With the expansion of technology that allows people from all walks of life to create and share photographs with a few clicks, the world seems to value visual media more than ever…
Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T
Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading that instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom-up and top-down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.
Duyck, Wouter; Vanderelst, Dieter; Desmet, Timothy; Hartsuiker, Robert J
A lexical decision experiment with Dutch-English bilinguals compared the effect of word frequency on visual word recognition in the first language with that in the second language. Bilinguals showed a considerably larger frequency effect in their second language, even though corpus frequency was matched across languages. Experiment 2 tested monolingual, native speakers of English on the English materials from Experiment 1. This yielded a frequency effect comparable to that of the bilinguals in Dutch (their L1). These results constrain the way in which existing models of word recognition can be extended to unbalanced bilingualism. In particular, the results are compatible with a theory by which the frequency effect originates from implicit learning. They are also compatible with models that attribute frequency effects to serial search in frequency-ordered bins (Murray & Forster, 2004), if these models are extended with the assumption that scanning speed is language dependent, or that bins are not language specific.
Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R
Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Brozek, J M
This article reviews additions to 3 ways of visually enriching verbal accounts of the history of psychology: illustrated books, slides, and videos. Although each approach has its limitations and its merits, taken together they constitute a significant addition to the printed word. As such, they broaden the toolkits of both the learners and the teachers of the history of psychology. Reference is also made to 3 earlier publications.
Stevens, W Dale; Kravitz, Dwight J; Peng, Cynthia S; Tessler, Michael Henry; Martin, Alex
The visual word form area (VWFA) is a region in the left occipitotemporal sulcus of literate individuals that is purportedly specialized for visual word recognition. However, there is considerable controversy about its functional specificity and connectivity, with some arguing that it serves as a domain-general, rather than word-specific, visual processor. The VWFA is a critical region for testing hypotheses about the nature of cortical organization, because it is known to develop only through experience (i.e., reading acquisition), and widespread literacy is too recent to have influenced genetic determinants of brain organization. Using a combination of advanced fMRI analysis techniques, including individual functional localization, multivoxel pattern analysis, and high-resolution resting-state functional connectivity (RSFC) analyses, with data from 33 healthy adult human participants, we demonstrate that (1) the VWFA can discriminate words from nonword letter strings (pseudowords); (2) the VWFA has preferential RSFC with Wernicke's area and other core regions of the language system; and (3) the strength of the RSFC between the VWFA and Wernicke's area predicts performance on a semantic classification task with words but not other categories of visual stimuli. Our results are consistent with the hypothesis that the VWFA is specialized for lexical processing of real words because of its functional connectivity with Wernicke's area. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is critical for determining the nature of category-related organization of the ventral visual system. However, its functional specificity and connectivity are fiercely debated. Recent work concluded that the VWFA is a domain-general, rather than word-specific, visual processor with no preferential functional connectivity with the language system. Using more advanced techniques, our results stand in stark contrast to these earlier findings. We demonstrate that the VWFA is highly
Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann
Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
Yiu, Loretta K; Pitts, Michael A; Canseco-Gonzalez, Enriqueta
Previous research examining the time course of lexical access during word recognition suggests that phonological processing precedes access to semantic information, which in turn precedes access to syntactic information. Bilingual word recognition likely requires an additional level: knowledge of which language a specific word belongs to. Using the recording of event-related potentials, we investigated the time course of access to language membership information relative to semantic (Experiment 1) and syntactic (Experiment 2) encoding during visual word recognition. In Experiment 1, Spanish-English bilinguals viewed a series of printed words while making dual-choice go/nogo and left/right hand decisions based on semantic (whether the word referred to an animal or an object) and language membership information (whether the word was in English or in Spanish). Experiment 2 used a similar paradigm but with syntactic information (whether the word was a noun or a verb) as one of the response contingencies. The onset and peak latency of the N200, a component related to response inhibition, indicated that language information is accessed earlier than semantic information. Similarly, language information was also accessed earlier than syntactic information (but only based on peak latency). We discuss these findings with respect to models of bilingual word recognition and language comprehension in general. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zhao, Pei; Li, Su; Zhao, Jing; Gaspar, Carl M; Weng, Xuchu
The N170 component of EEG evoked by visual words is an index of perceptual expertise for the visual word across different writing systems. In the present study, we investigated whether these N170 markers for Chinese, a very complex script, could emerge quickly after short-term learning (∼100 min) in young Chinese children, and whether early writing experience can enhance the acquisition of these neural markers for expertise. Two groups of preschool children received visual identification and free writing training, respectively. Short-term character training resulted in selective enhancement of the N170 to characters, consistent with normal expert processing. Visual identification training resulted in increased N170 amplitude to characters in the right hemisphere, and N170 amplitude differences between characters and faces were decreased, whereas the amplitude difference between characters and tools increased. Writing training led to the disappearance of an initial amplitude difference between characters and faces in the right hemisphere. These results show that N170 markers for visual expertise emerge rapidly in young children after word learning, independent of the type of script young children learn, and that visual identification and writing produce different effects. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
In this study we investigated the intricate interplay between central linguistic processing and peripheral motor processes during typewriting. Participants had to typewrite two-constituent (noun-noun) Finnish compounds in response to picture presentation while their typing behavior was registered. As dependent measures we used writing onset time to assess which processes were completed before writing and inter-key intervals to assess which processes were going on during writing. It was found that writing onset time was determined by whole word frequency rather than constituent frequencies, indicating that compound words are retrieved as whole orthographic units before writing is initiated. In addition, we found that the length of the first syllable also affects writing onset time, indicating that the first syllable is fully prepared before writing commences. The inter-key interval results showed that linguistic planning is not fully ready before writing, but cascades into the motor execution phase. More specifically, inter-key intervals were largest at syllable and morpheme boundaries, supporting the view that additional linguistic planning takes place at these boundaries. Bigram and trigram frequency also affected inter-key intervals, with shorter intervals corresponding to higher frequencies. This can be explained by stronger memory traces for frequently co-occurring letter sequences in the motor memory for typewriting. These frequency effects were even larger in the second than in the first constituent, indicating that low-level motor memory starts to become more important during the course of writing compound words. We discuss our results in the light of current models of morphological processing and written word production.
Markonis, Dimitrios; Seco de Herrera, Alba G.; Eggel, Ivan; Müller, Henning
The volume of biomedical literature published has increased strongly in recent years, and keeping up to date even in narrow domains is difficult. Images represent essential information in these articles and, in connection with keyword search, can help readers browse large volumes of articles more quickly. Content-based image retrieval assists the retrieval of visual content, and image categorisation can be an important first step toward facilitating it. To represent scientific articles visually, medical images need to be separated from general images such as flowcharts or graphs to facilitate browsing, as graphs contain little information. Medical modality classification is a second step to focus search. The techniques described in this article first classify images into broad categories; in a second step the images are further classified into the exact medical modalities. The system combines the Scale-Invariant Feature Transform (SIFT) and density-based clustering (DENCLUE). Visual words are first created globally to differentiate the broad categories, and then within each category a new visual vocabulary is created for modality classification. The results show the difficulty of differentiating between some modalities by visual means alone; on the other hand, the improvement in accuracy of the two-step approach shows the usefulness of the method. The system is currently being integrated into the Goldminer image search engine of the ARRS (American Roentgen Ray Society) as a web service, allowing image search to be concentrated automatically on clinically relevant images.
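The bag-of-visual-words pipeline described in this abstract (cluster local descriptors into a vocabulary, then encode each image as a histogram of visual-word counts) can be sketched as follows. This is a minimal illustration, not the authors' implementation: plain k-means stands in for DENCLUE, and random vectors stand in for real SIFT descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for SIFT keypoint descriptors: 128-dim vectors per image.
# (The paper clusters real SIFT descriptors with DENCLUE; k-means is
# used here only as an illustrative substitute.)
def extract_descriptors(n_keypoints):
    return rng.random((n_keypoints, 128))

def kmeans(data, k, iters=20):
    """Minimal k-means returning the cluster centroids."""
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids

# Step 1: build a visual vocabulary from all training descriptors.
train = np.vstack([extract_descriptors(50) for _ in range(10)])
vocab = kmeans(train, k=20)

# Step 2: encode an image as a histogram of visual-word counts;
# these histograms would then feed a per-category classifier.
def bag_of_visual_words(descriptors):
    dists = np.linalg.norm(descriptors[:, None] - vocab[None], axis=2)
    return np.bincount(dists.argmin(axis=1), minlength=len(vocab))

hist = bag_of_visual_words(extract_descriptors(50))
print(hist.sum())  # 50: one visual word per keypoint
```

The two-step scheme in the article repeats step 1 per broad category, so that each category gets its own vocabulary for modality classification.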
McQueen, James M; Huettig, Falk
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.
Nikolaeva Julia E.
The music for the three-part television movie Little Tragedies (1979), based on Pushkin's literary works (directed by M. Schweitzer, music composed by A. Schnittke), is investigated. The trinity of music, poetic words, and visual imagery, and their amazing consistency and reciprocal functioning, is considered in the aspect of polyphony as the universal logical principle of building an artistic form. All the music of the TV movie grows out of two leitmotifs, and their varied implementation in the film is illustrated through polyphonic analysis (music/words/images) of fragments from the four main film sections: "Scene from Faust", "Mozart and Salieri", "The Covetous Knight", and "A Feast in Time of Plague".
Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, the relative contributions of high and low spatial frequency (HSF and LSF) information to visual word recognition remain a matter of debate. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five-letter words preceded by a forward mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target, and either contained only high, only low, or full spatial frequency information. Additionally, within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found, with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects; however, they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects, suggesting that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects, indicating that larger-scale information may still play a role in word recognition.
Levy, Jonathan; Vidal, Juan R; Oostenveld, Robert; FitzPatrick, Ian; Démonet, Jean-François; Fries, Pascal
The current state of empirical investigations refers to consciousness as an all-or-none phenomenon. However, a recent theoretical account opens up this perspective by proposing a partial level (between nil and full) of conscious perception. In the well-studied case of single-word reading, short-lived exposure can trigger incomplete word-form recognition wherein letters fall short of forming a whole word in one's conscious perception, thereby hindering word-meaning access and report. Hence, the processing from incomplete to complete word-form recognition straightforwardly mirrors a transition from partial to full-blown consciousness. We therefore hypothesized that this putative functional bottleneck to consciousness (i.e. the perceptual boundary between partial and full conscious perception) would emerge at a major key hub region for word-form recognition during reading, namely the left occipito-temporal junction. We applied a real-time staircase procedure and titrated subjective reports at the threshold between partial (letters) and full (whole word) conscious perception. This experimental approach allowed us to collect trials with identical physical stimulation, yet reflecting distinct perceptual experience levels. Oscillatory brain activity was monitored with magnetoencephalography and revealed that the transition from partial-to-full word-form perception was accompanied by alpha-band (7-11 Hz) power suppression in the posterior left occipito-temporal cortex. This modulation of rhythmic activity extended anteriorly towards the visual word form area (VWFA), a region whose selectivity for word-forms in perception is highly debated. The current findings provide electrophysiological evidence for a functional bottleneck to consciousness, thereby empirically instantiating a recently proposed partial perspective on consciousness. Moreover, the findings provide an entirely new outlook on the functioning of the VWFA as a late bottleneck to full-blown conscious word perception.
Jorgensen, C. C.; Lee, D. D.
A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
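The pipeline described above (EMG acquisition, preprocessing, feature extraction, pattern classification) might be sketched as follows. This is a toy illustration with fabricated signals: per-window RMS features and a nearest-template classifier stand in for the actual preprocessing chain and neural-network classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

# Feature extraction: root-mean-square amplitude per window, a common
# simple feature for surface-EMG signals.
def rms_features(signal, win=50):
    windows = signal[: len(signal) // win * win].reshape(-1, win)
    return np.sqrt((windows ** 2).mean(axis=1))

# Fabricated EMG traces for two "words", differing only in the overall
# amplitude of the muscle activity (purely for illustration).
def fake_emg(scale):
    return scale * rng.normal(0, 1, 500)

templates = {word: rms_features(fake_emg(scale))
             for word, scale in [("yes", 1.0), ("no", 3.0)]}

# Classification: nearest template in feature space (the real system
# uses a trained neural-network pattern classifier instead).
def classify(signal):
    feats = rms_features(signal)
    return min(templates, key=lambda w: np.linalg.norm(feats - templates[w]))

print(classify(fake_emg(3.0)))
```

A real system would of course train on many labeled utterances per word and use richer spectral features, but the structure (signal, window features, classifier) is the same.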
Pas, Maciej; Nakamura, Kimihiro; Sawamoto, Nobukatsu; Aso, Toshihiko; Fukuyama, Hidenao
Visual object recognition is generally known to be facilitated when targets are preceded by the same or relevant stimuli. For written words, however, the beneficial effect of priming can be reversed when primes and targets share initial syllables (e.g., "boca" and "bono"). Using fMRI, the present study explored neuroanatomical correlates of this negative syllabic priming. In each trial, participants made semantic judgment about a centrally presented target, which was preceded by a masked prime flashed either to the left or right visual field. We observed that the inhibitory priming during reading was associated with a left-lateralized effect of repetition enhancement in the inferior frontal gyrus (IFG), rather than repetition suppression in the ventral visual region previously associated with facilitatory behavioral priming. We further performed a second fMRI experiment using a classical whole-word repetition priming paradigm with the same hemifield procedure and task instruction, and obtained well-known effects of repetition suppression in the left occipito-temporal cortex. These results therefore suggest that the left IFG constitutes a fast word processing system distinct from the posterior visual word-form system and that the directions of repetition effects can change with intrinsic properties of stimuli even when participants' cognitive and attentional states are kept constant. Copyright © 2015 Elsevier Inc. All rights reserved.
The purpose of this study was to determine whether word processing might change a second language (L2) learner's writing processes and improve the quality of his essays over a relatively long period of time. We worked from the assumption that research comparing word processing to pen-and-paper composing tends to show positive results when studies include lengthy terms of data collection and when appropriate instruction and training are provided. We compared the processes and products of L2 composing displayed by a 29-year-old male Mandarin learner of English with intermediate proficiency while he wrote, over 8 months, 14 compositions grouped into 7 comparable pairs of topics, alternating between a laptop computer and pen and paper. All keystrokes were recorded electronically in the computer environment; visual records of all text changes were made for the pen-and-paper writing. Think-aloud protocols were recorded in all sessions. Analyses indicate advantages for the word-processing medium over the pen-and-paper medium in terms of: a greater frequency of revisions made at the discourse and syntactic levels; higher scores for content on analytic ratings of the completed compositions; and more extensive evaluation of written texts in think-aloud verbal reports.
Kuperman, Victor; Bertram, Raymond; Baayen, R. Harald
This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., "plaats+ing" "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter…
Shalhoub-Awwad, Yasmin; Leikin, Mark
This study investigated the effects of the Arabic root in the visual word recognition process among young readers in order to explore its role in reading acquisition and its development within the structure of the Arabic mental lexicon. We examined cross-modal priming of words that were derived from the same root of the target…
Dong, Jianfeng; Li, Xirong; Snoek, Cees G. M.
This paper strives to find the sentence that best describes the content of an image or video. Different from existing works, which rely on a joint subspace for image/video-to-sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design by varying the sentence...
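The core idea of Word2VisualVec, as described in the abstract, is a multi-layer perceptron that maps a vectorized sentence into a visual feature space, where candidates are then ranked by similarity. A minimal sketch follows; the dimensions and the random weights are hypothetical placeholders, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: a 300-dim sentence vector (e.g. averaged
# word embeddings) mapped into a 2048-dim visual feature space (e.g.
# a CNN pooling layer). Weights are random stand-ins for a trained MLP.
d_text, d_hidden, d_visual = 300, 512, 2048
W1 = rng.normal(0, 0.01, (d_text, d_hidden))
W2 = rng.normal(0, 0.01, (d_hidden, d_visual))

def sentence_to_visual(sentence_vec):
    """Two-layer perceptron predicting a visual encoding from text."""
    h = np.maximum(0, sentence_vec @ W1)  # ReLU hidden layer
    return h @ W2                         # predicted visual vector

def rank_images(sentence_vec, image_features):
    """Rank images by cosine similarity to the predicted visual vector."""
    v = sentence_to_visual(sentence_vec)
    sims = image_features @ v / (
        np.linalg.norm(image_features, axis=1) * np.linalg.norm(v) + 1e-9)
    return np.argsort(-sims)  # best match first

images = rng.random((5, d_visual))      # placeholder CNN features
order = rank_images(rng.random(d_text), images)
print(order.shape)
```

Matching in the visual space (rather than a joint subspace) means the image side needs no extra projection: existing CNN features are used as-is, and only the text side is learned.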
Brysbaert, Marc; Keuleers, Emmanuel; New, Boris
In this Perspective Article we assess the usefulness of Google's new word frequencies for word recognition research (lexical decision and word naming). We find that, despite the massive corpus on which the Google estimates are based (131 billion words from books published in the United States alone), the Google American English frequencies explain 11% less of the variance in the lexical decision times from the English Lexicon Project (Balota et al., 2007) than the SUBTLEX-US word frequencies, based on a corpus of 51 million words from film and television subtitles. Further analyses indicate that word frequencies derived from recent books (published after 2000) are better predictors of word processing times than frequencies based on the full corpus, and that word frequencies based on fiction books predict word processing times better than word frequencies based on the full corpus. The most predictive word frequencies from Google still do not explain more of the variance in word recognition times of undergraduate students and old adults than the subtitle-based word frequencies.
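The comparison reported here boils down to regressing lexical decision times on log word frequency and comparing the variance explained (R-squared) across frequency norms. A toy sketch on synthetic data — the frequencies and response times below are simulated for illustration, not the ELP, Google, or SUBTLEX data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: log frequencies from two hypothetical norms and
# simulated lexical decision times.
n = 500
log_freq_a = rng.normal(3.0, 1.0, n)              # well-matched norms (e.g. subtitle-based)
log_freq_b = log_freq_a + rng.normal(0, 0.8, n)   # noisier norms (e.g. a mismatched corpus)
rt = 700 - 40 * log_freq_a + rng.normal(0, 30, n) # higher frequency -> faster responses

def r_squared(x, y):
    """Variance in y explained by a simple linear regression on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# The predictor that better matches the words people actually encounter
# explains more variance in response times.
print(r_squared(log_freq_a, rt) > r_squared(log_freq_b, rt))  # True
```

This mirrors the article's logic: corpus size alone does not help if the frequency estimates are a noisier proxy for the words readers experience.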
PICTURE-WORD INTERACTION: IMPLICATIONS FOR SPEEDED ON-LINE PROCESSING AND DELAYED MEMORY RETRIEVAL. Scientific Interim Report, Jan 1 - March 31, 1978. ... picture-word comparison studies may thus represent a small portion of lexical memory which may be the only subset of words that can be represented in a... Effect of a picture mask on memory for visual detail: the following experiment is one of a series concerned with the extraction and encoding of information
Mueller, Christina J; Kuchinke, Lars
The exploratory study investigated individual differences in implicit processing of emotional words in a lexical decision task. A processing advantage for positive words was observed, and differences between happy and fear-related words in response times were predicted by individual differences in specific variables of emotion processing: Whereas more pronounced goal-directed behavior was related to a specific slowdown in processing of fear-related words, the rate of spontaneous eye blinks (indexing brain dopamine levels) was associated with a processing advantage of happy words. Estimating diffusion model parameters revealed that the drift rate (rate of information accumulation) captures unique variance of processing differences between happy and fear-related words, with highest drift rates observed for happy words. Overall emotion recognition ability predicted individual differences in drift rates between happy and fear-related words. The findings emphasize that a significant amount of variance in emotion processing is explained by individual differences in behavioral data.
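The drift rate the authors estimate comes from the diffusion model of two-choice decisions, in which evidence accumulates noisily toward a response boundary. A toy random-walk simulation — all parameter values invented for illustration — shows why a higher drift rate (faster information accumulation) yields shorter decision times:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_rt(drift, boundary=1.0, dt=0.001, noise=1.0, n=500):
    """Mean decision time of a diffusion process: evidence accumulates at
    `drift` per second with Gaussian noise until it hits +/- boundary."""
    times = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        times.append(t)
    return float(np.mean(times))

# Higher drift rate -> shorter decision times, mirroring the advantage
# the study reports for happy words (which showed the highest drift rates).
print(simulate_rt(3.0) < simulate_rt(1.0))  # True
```

Real parameter estimation (as in the study) fits drift, boundary, and non-decision time jointly to the full response time distributions rather than simulating forward.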
Sauval, Karinne; Perre, Laetitia; Casalis, Séverine
The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…
Full Text Available The paper describes a time-domain simulation of gear pitting damage using an animation program. Key frames are used to create the illusion of motion. The animation uses experimental results on high-cycle material fatigue. The fatigue damage occurs in the nominal creep area on the flank of the gear tooth sample, which is loaded with variably positioned Hertzian pressure. As the force is applied, pressure accumulates between the two convex surfaces, resulting in material damage beneath the curved surfaces in contact. Moreover, further damage has been registered on the surface, caused by exceeding the elastic-plastic limit and by the development of "tabs". The tabs serve as origins of surface micro-cracks, driven by shear stress as well as by enclosed, pressurized grease. This deformation, together with the extreme pressures transmitted according to Pascal's law, contributes to the elongation and growth of the surface micro-cracks. Non-homogeneous parts of the material volume also support the initiation and development of micro-cracks. The resulting visualization of tooth-flank fatigue damage provides a clear and easy-to-understand description of the damage development process, from micro-crack initiation to final fragmentation due to pitting degradation.
Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976
Molinaro, Nicola; Conrad, Markus; Barber, Horacio A; Carreiras, Manuel
Electrical scalp recordings revealed the brain's sensitivity to both lexical properties of words and their contextual fit with a previous sentence context around 400 ms after word presentation. The so-called N400 component has been suggested to reflect the cost either of target word recognition or of a postlexical process for integrating word meaning into a context. In a sentence comprehension study, we manipulated the potential interference exerted in visual word recognition by target words' orthographic neighbors and the semantic constraints induced by the context in one and the same experiment. Neighbor frequency modulated the N400 only in low-constraint contexts; in high-constraint contexts the largely suppressed N400 did not show this neighbor interference effect. Furthermore, the earlier onset of the ERP effect (about 100 ms) induced by the contextual manipulation compared to the neighbor manipulation suggests distinct neurocognitive processes affecting the N400 component in an interactive manner.
Behrmann, Marlene; Plaut, David C
Considerable research has supported the view that faces and words are subserved by independent neural mechanisms located in the ventral visual cortex in opposite hemispheres. On this view, right hemisphere ventral lesions that impair face recognition (prosopagnosia) should leave word recognition unaffected, and left hemisphere ventral lesions that impair word recognition (pure alexia) should leave face recognition unaffected. The current study shows that neither of these predictions was upheld. A series of experiments characterizing speed and accuracy of word and face recognition were conducted in 7 patients (4 pure alexic, 3 prosopagnosic) and matched controls. Prosopagnosic patients revealed mild but reliable word recognition deficits, and pure alexic patients demonstrated mild but reliable face recognition deficits. The apparent comingling of face and word mechanisms is unexpected from a domain-specific perspective, but follows naturally as a consequence of an interactive, learning-based account in which neural processes for both faces and words are the result of an optimization procedure embodying specific computational principles and constraints.
Blenkhorn, P; Evans, G
This paper describes a novel method for automatically generating Braille documents from word-processed (Microsoft Word) documents. In particular it details how, by using the Word Object Model, the translation system can map the layout information (format) in the print document into an appropriate Braille equivalent.
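The core of such print-to-Braille translation can be sketched as a character-level mapping plus format flags pulled from the document model. This toy version covers only a few letters using Unicode Braille patterns; the mapping subset and the emphasis-marker convention are illustrative assumptions, not the authors' Word Object Model-based system:

```python
# Minimal grade-1 letter map (subset); real systems use full contracted Braille tables.
BRAILLE = {"a": "\u2801", "b": "\u2803", "c": "\u2809",
           "d": "\u2819", "e": "\u2811", " ": "\u2800"}

def to_braille(text, bold=False):
    """Translate plain text to Braille cells; a leading marker flags emphasis
    taken from the word processor's format info (marker choice is an assumption)."""
    cells = "".join(BRAILLE.get(ch, "?") for ch in text.lower())
    return ("\u2828" + cells) if bold else cells  # prepend an emphasis indicator cell

print(to_braille("bad"))
```

In the system described, the format information (headings, emphasis, lists) is read from the Word Object Model and mapped to the corresponding Braille layout conventions, rather than being hard-coded as here.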
Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span), predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight and nine year old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children.
Huszár, Tamás; Makra, Emese; Hallgató, Emese; Janacsek, Karolina; Németh, Dezsö
Knowledge about how we process taboo words brings us closer to understanding emotional processes and broadens the interpretative framework in psychiatry and psychotherapy. In this study the lexical decision paradigm was used. Subjects were presented with neutral words, taboo words, and pseudowords in random order, and had to indicate whether the presented word was meaningful (neutral and taboo words) or meaningless (pseudowords). Each target word was preceded by a prime word (either taboo or neutral). The SOA differed between the two experimental conditions (250 msec in the experimental group, 500 msec in the control group). In the experimental group, response latencies increased for target words preceded by taboo prime words compared to those preceded by neutral prime words. In the control group the prime had no such differential effect on response latencies. The results indicate that emotional processing of taboo words occurs very early and that the negative effect of a taboo word on the following lexical decision fades away within 500 msec. Our experiment and other empirical data are presented in this paper.
White, Sarah J.; Hirotani, Masako; Liversedge, Simon P.
Two experiments are presented that examine how the visual characteristics of Japanese words influence eye movement behaviour during reading. In Experiment 1, reading behaviour was compared for words comprising either one or two kanji characters. The one-character words were significantly less likely to be fixated on first-pass, and had…
Hsiao, Janet Hui-Wen
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. Through training a computational model for SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which readers have long been exposed, is one of the factors that accounts for hemispheric asymmetry effects in visual word recognition. Copyright © 2011 Elsevier Inc. All rights reserved.
Grossi, Giordana; Coch, Donna
Five prime types (unrelated words, pronounceable nonwords, illegal strings of letters, false fonts, or neutral strings of Xs) preceded word and nonword targets in a masked priming study designed to investigate word form processing as indexed by event-related potentials (ERPs). Participants performed a lexical decision task on targets. In the 150-250-ms epoch at fronto-central, central, and temporo-parietal sites ERPs were smallest to targets preceded by words and nonwords, followed by letter strings, false fonts, and finally neutral primes. This refractory pattern sensitive to orthography supports the view that ERPs in the 150-250-ms epoch index activation of neural systems involved in word form processing and suggests that such activation may be graded, being maximal with word-like stimuli and relatively reduced with alphabet-like stimuli. Further, these results from a masked priming paradigm confirm the automatic nature of word form processing.
Aschenbrenner, Andrew J; Balota, David A; Weigand, Alexandra J; Scaltritti, Michele; Besner, Derek
A prominent question in visual word recognition is whether letters within a word are processed in parallel or in a left to right sequence. Although most contemporary models posit parallel processing, this notion seems at odds with well-established serial position effects in word identification that indicate preferential processing for the initial letter. The present study reports 4 experiments designed to further probe the locus of the first position processing advantage. The paradigm involved masked target words presented for short durations and required participants to subsequently select from 2 alternatives, 1 which was identical to the target and 1 that differed by a single letter. Experiment 1 manipulated the case between the target and the alternatives to ensure that previous evidence for a first position effect was not due to simple perceptual matching. The results continued to yield a robust first position advantage. Experiment 2 attempted to eliminate postperceptual decision processes as the explanatory mechanism by presenting single letters as targets and requiring participants to select an entire word that contained the target letter at different positions. Here the first position advantage was eliminated, suggesting postperceptual decision processes do not underlie the effect. The final 2 experiments presented masked stimuli either all vertically (Experiment 3) or randomly intermixed vertical and horizontal orientation (Experiment 4). In both cases, a robust first position advantage was still obtained. The authors consider alternative interpretations of this effect and suggest that these results are consistent with a rapid deployment of spatial attention to the beginning of a target string which occurs poststimulus onset. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
McQueen, J.M.; Hüttig, F.
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase
Lanzagorta, Marco O.; Rosenberg, Robert O.; Trafton, Greg
What makes a graphic image a good visualization? Why is one visualization better than another? Why are 3D visualizations better than 2D visualizations in some cases but not others? How does the size of the display, color, contrast level, brightness, or frame rate affect the usability of the visualization, and how do these 'physical' quantities affect the type and amount of information that can be extracted from the visualization by the user? These are just a few of the questions that a multi-disciplinary effort at the NRL is trying to answer. By combining visualization experts, physicists, and cognitive scientists, we are trying to understand the cognitive processes carried out in the minds of scientists as they perform a visual analysis of their data. The results from this project are being used for the design of visualization methodologies and basic cognitive work. In this paper, we present a general description of our project and a brief discussion of the results obtained in trying to understand why 3D visualizations are sometimes better than 2D, as most previous attempts at studying this problem have resulted in theories that are either too vague, under-specified, or not informative across different contexts.
Full Text Available Effects reflecting serial within-word processing are frequently found in pseudo- and non-word recognition tasks not only among fluent, but especially among dyslexic readers. However, the time course and locus of these serial within-word processing effects in the cognitive hierarchy (i.e., orthographic, phonological, or lexical) have remained elusive. We studied whether a subject's eye movements during a lexical decision task would provide information about the temporal dynamics of serial within-word processing. We assumed that if there is serial within-word processing proceeding from left to right, items with informative beginnings would attract the gaze position and (micro-)saccadic eye movements earlier in time relative to those with informative endings. In addition, we compared responses to word, non-word, and pseudo-word items to study whether serial within-word processing stems mainly from a lexical, orthographic, or phonological processing level, respectively. Gaze positions showed earlier responses to anomalies located at pseudo- and non-word beginnings rather than endings, whereas informative word beginnings or endings did not affect gaze positions. The overall pattern of results suggests parallel letter processing of real words and rapid serial within-word processing when reading novel words. Dysfluent readers' gaze position responses toward anomalies located at pseudo- and non-word endings were delayed substantially, suggesting impairment in serial processing at an orthographic processing level.
Cai, Wei; Lee, Benny P. H.
This study examines strategies (inferencing and ignoring) and knowledge sources (semantics, morphology, paralinguistics, etc.) that second language learners of English use to process unfamiliar words in listening comprehension and whether the use of strategies or knowledge sources relates to successful text comprehension or word comprehension.…
Purcell, Jeremy J; Shea, Jennifer; Rapp, Brenda
Lexical orthographic information provides the basis for recovering the meanings of words in reading and for generating correct word spellings in writing. Research has provided evidence that an area of the left ventral temporal cortex, a subregion of what is often referred to as the visual word form area (VWFA), plays a significant role specifically in lexical orthographic processing. The current investigation goes beyond this previous work by examining the neurotopography of the interface of lexical orthography with semantics. We apply a novel lesion mapping approach with three individuals with acquired dysgraphia and dyslexia who suffered lesions to left ventral temporal cortex. To map cognitive processes to their neural substrates, this lesion mapping approach applies similar logical constraints to those used in cognitive neuropsychological research. Using this approach, this investigation: (a) identifies a region anterior to the VWFA that is important in the interface of orthographic information with semantics for reading and spelling; (b) determines that, within this orthography-semantics interface region (OSIR), access to orthography from semantics (spelling) is topographically distinct from access to semantics from orthography (reading); (c) provides evidence that, within this region, there is modality-specific access to and from lexical semantics for both spoken and written modalities, in both word production and comprehension. Overall, this study contributes to our understanding of the neural architecture at the lexical orthography-semantic-phonological interface within left ventral temporal cortex.
Hoedemaker, Renske S; Gordon, Peter C
In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
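The ex-Gaussian distribution fitted here is a Gaussian (mu, sigma) convolved with an exponential (tau); effects that grow with response time load on the slow exponential tail. A method-of-moments sketch on simulated reading times, exploiting the fact that the third central moment of an ex-Gaussian equals 2*tau**3 (parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated ex-Gaussian reading times: Gaussian plus exponential component.
mu, sigma, tau = 300.0, 40.0, 100.0
rt = rng.normal(mu, sigma, 50_000) + rng.exponential(tau, 50_000)

# Method-of-moments recovery of the three parameters.
m1 = rt.mean()
m2 = rt.var()
m3 = ((rt - m1) ** 3).mean()   # third central moment = 2 * tau**3
tau_hat = (m3 / 2) ** (1 / 3)
sigma_hat = np.sqrt(m2 - tau_hat ** 2)
mu_hat = m1 - tau_hat

print(round(tau_hat), round(mu_hat))  # close to 100 and 300
```

Studies like this one typically use maximum-likelihood fits rather than moments, but the decomposition is the same: a priming effect that grows with response time shows up in tau (and in the distributional pattern), not just in the mean.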
Snell, Joshua; Meeter, Martijn; Grainger, Jonathan
A hotly debated issue in reading research concerns the extent to which readers process parafoveal words, and how parafoveal information might influence foveal word recognition. We investigated syntactic word processing both in sentence reading and in reading isolated foveal words when these were flanked by parafoveal words. In Experiment 1 we found a syntactic parafoveal preview benefit in sentence reading, meaning that fixation durations on target words were decreased when there was a syntactically congruent preview word at the target location (n) during the fixation on the pre-target (n-1). In Experiment 2 we used a flanker paradigm in which participants had to classify foveal target words as either noun or verb, when those targets were flanked by syntactically congruent or incongruent words (stimulus on-time 170 ms). Lower response times and error rates in the congruent condition suggested that higher-order (syntactic) information can be integrated across foveal and parafoveal words. Although higher-order parafoveal-on-foveal effects have been elusive in sentence reading, results from our flanker paradigm show that the reading system can extract higher-order information from multiple words in a single glance. We propose a model of reading to account for the present findings.
Gullick, Margaret M; Mitra, Priya; Coch, Donna
Previous event-related potential studies have indicated that both a widespread N400 and an anterior N700 index differential processing of concrete and abstract words, but the nature of these components in relation to concreteness and imagery has been unclear. Here, we separated the effects of word concreteness and task demands on the N400 and N700 in a single word processing paradigm with a within-subjects, between-tasks design and carefully controlled word stimuli. The N400 was larger to concrete words than to abstract words, and larger in the visualization task condition than in the surface task condition, with no interaction. A marked anterior N700 was elicited only by concrete words in the visualization task condition, suggesting that this component indexes imagery. These findings are consistent with a revised or extended dual coding theory according to which concrete words benefit from greater activation in both verbal and imagistic systems. Copyright © 2013 Society for Psychophysiological Research.
Parker, Andrew; Dagnall, Neil
Two experiments are presented that investigate the effects of dynamic visual noise (DVN) on memory for concrete and abstract words. Memory for concrete words is typically superior to that of abstract words and is referred to as the concreteness effect. DVN is a procedure that has been demonstrated to interfere selectively with visual working memory and the generation of images from long-term memory. It was reasoned that if concreteness effects arise because of the ability of the latter to activate visual representations, then DVN should selectively impair memory for concrete words. Experiment 1 found DVN to selectively reduce free recall of concrete words. Experiment 2 investigated recognition memory and found DVN to reduce memory accuracy and remember responses, while increasing know responses to concrete words.
Mayer, Katja M; Yildiz, Izzet B; Macedonia, Manuela; von Kriegstein, Katharina
At present, it is largely unclear how the human brain optimally learns foreign languages. We investigated teaching strategies that utilize complementary information ("enrichment"), such as pictures or gestures, to optimize vocabulary learning outcome. We found that learning while performing gestures was more efficient than the common practice of learning with pictures and that both enrichment strategies were better than learning without enrichment ("verbal learning"). We tested the prediction of an influential cognitive neuroscience theory that provides explanations for the beneficial behavioral effects of enrichment: the "multisensory learning theory" attributes the benefits of enrichment to recruitment of brain areas specialized in processing the enrichment. To test this prediction, we asked participants to translate auditorily presented foreign words during fMRI. Multivariate pattern classification allowed us to decode from the brain activity under which enrichment condition the vocabulary had been learned. The visual-object-sensitive lateral occipital complex (LOC) represented auditory words that had been learned with pictures. The biological motion superior temporal sulcus (bmSTS) and motor areas represented auditory words that had been learned with gestures. Importantly, brain activity in these specialized visual and motor brain areas correlated with behavioral performance. The cortical activation pattern found in the present study strongly supports the multisensory learning theory in contrast to alternative explanations. In addition, the results highlight the importance of learning foreign language vocabulary with enrichment, particularly with self-performed gestures. Copyright © 2015 Elsevier Ltd. All rights reserved.
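Multivariate pattern classification of the kind used in this study amounts to training a classifier to predict the learning condition from a trial's voxel activity pattern. A numpy-only nearest-centroid sketch on synthetic patterns — real MVPA pipelines use cross-validated linear classifiers on fMRI beta images, and the data here are fabricated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "voxel patterns": 40 trials x 20 voxels per learning condition,
# with a condition-specific mean shift (stand-in for LOC vs. bmSTS activity).
n_trials, n_voxels = 40, 20
picture = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + 0.8
gesture = rng.normal(0.0, 1.0, (n_trials, n_voxels)) - 0.8
X = np.vstack([picture, gesture])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = picture-enriched, 1 = gesture-enriched

# Leave-one-out nearest-centroid decoding: classify each held-out trial by
# its distance to the mean pattern of each condition (computed without it).
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c0 = X[mask & (y == 0)].mean(axis=0)
    c1 = X[mask & (y == 1)].mean(axis=0)
    pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
    correct += pred == y[i]

accuracy = correct / len(y)
print(accuracy > 0.5)  # decodes the learning condition well above chance
```

Above-chance decoding accuracy is the evidence that the region's activity pattern carries information about the enrichment condition, which is the logic behind the LOC and bmSTS results reported.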
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bobrik, Ralph; Reichert, M.U.; Bauer, Thomas
In large organizations, different users or user roles have distinct perspectives on business processes and related data, so personalized views of the managed processes are needed. Existing BPM tools, however, do not provide adequate mechanisms for building and visualizing such views. Very often
Lam, Kevin J Y; Dijkstra, Ton; Rueschemeyer, Shirley-Ann
Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
This research focuses on young children's experiences of the visual mode embedded in new multimodal literacy practices. An enquiry was undertaken into the role of visual and digital images in a group of 11 four-year-olds' out-of-school lives. The children photographed their use of a range of primarily visual-based media at home, to produce a book…
Walker, Maegen; Ciraolo, Margeaux; Dewald, Andrew; Sinnett, Scott
Previous work suggests that, when attended, pictures may be processed more readily than words. The current study extends this research to assess potential differences in processing between these stimulus types when they are actively ignored. In a dual-task paradigm, facilitated recognition for previously ignored words was found provided that they appeared frequently with an attended target. When adapting the same paradigm here, previously unattended pictures were recognized at high rates regardless of how they were paired with items during the primary task, whereas unattended words were later recognized at higher rates only if they had previously been aligned with primary task targets. Implicit learning effects obtained by aligning unattended items with attended task-targets may apply only to conceptually abstract stimulus types, such as words. Pictures, on the other hand, may maintain direct access to semantic information, and are therefore processed more readily than words, even when being actively ignored.
Francis, Wendy S; MacLeod, Colin M; Taylor, Randolph S
We conducted four Stroop color-word experiments to examine how multiple stimuli influence interference. Experiments 1a and 1b showed that interference was strong when the word and color were integrated, and that visual and auditory words made independent contributions to interference when these words had different meanings. Experiments 2 and 3 confirmed this pattern when the word information and color information were not integrated, and hence when overall interference was substantially less. Auditory and visual interference effects are comparable except when the visual distracter is integrated with the color, in which case interference is substantially enhanced. Overall, these results are interpreted as being most consistent with a joint influence account of interference as opposed to a capture account.
Brink, D. van den
The aim of this thesis was to gain more insight into spoken-word comprehension and the influence of sentence-contextual information on these processes using ERPs. By manipulating critical words in semantically constraining sentences, in a semantic or syntactic sense, and examining the consequences in
Candan, Ayse; Kuntay, Aylin C.; Yeh, Ya-ching; Cheung, Hintat; Wagner, Laura; Naigles, Letitia R.
We compare the processing of transitive sentences in young learners of a strict word order language (English) and two languages that allow noun omissions and many variant word orders: Turkish, a case-marked language, and Mandarin Chinese, a non case-marked language. Children aged 1-3 years listened to simple transitive sentences in the typical…
Francis, Wendy S.; Duran, Gabriela; Augustini, Beatriz K.; Luevano, Genoveva; Arzate, Jose C.; Saenz, Silvia P.
Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish-English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial…
Examines the effects of word processing on writing quality and the amount of text produced by basic writers. Finds that students using computers wrote more, but that there was no difference in quality between those who used a word processor and those who did not. (MS)
Wallentin, Mikkel; Gravholt, Claus Højbjerg; Skakkebæk, Anne
Competing theories attempt to explain the function of Broca's area in single word processing. Studies have found the region to be more active during processing of pseudo words than real words, during infrequent words relative to frequent words, and during Stroop (incongruent) color words compared … displayed in green or red (incongruent vs congruent colors). One of the colors, however, was presented three times as often as the other, making it possible to study both congruency and frequency effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible …
Amsel, Ben D; Kutas, Marta; Coulson, Seana
In grapheme-color synesthesia, seeing particular letters or numbers evokes the experience of specific colors. We investigate the brain's real-time processing of words in this population by recording event-related brain potentials (ERPs) from 15 grapheme-color synesthetes and 15 controls as they judged the validity of word pairs ('yellow banana' vs. 'blue banana') presented under high and low visual contrast. Low contrast words elicited delayed P1/N170 visual ERP components in both groups, relative to high contrast. When color concepts were conveyed to synesthetes by individually tailored achromatic grapheme strings ('55555 banana'), visual contrast effects were like those in color words: P1/N170 components were delayed but unchanged in amplitude. When controls saw equivalent colored grapheme strings, visual contrast modulated P1/N170 amplitude but not latency. Color induction in synesthetes thus differs from color perception in controls. Independent from experimental effects, all orthographic stimuli elicited larger N170 and P2 in synesthetes than controls. While P2 (150-250 ms) enhancement was similar in all synesthetes, N170 (130-210 ms) amplitude varied with individual differences in synesthesia and visual imagery. Results suggest immediate cross-activation in visual areas processing color and shape is most pronounced in so-called projector synesthetes whose concurrent colors are experienced as originating in external space.
Álvarez, Carlos J.; Garcia-Saavedra, Guacimara; Luque, Juan L.; Taft, Marcus
Some inconsistency is observed in the results from studies of reading development regarding the role of the syllable in visual word recognition, perhaps due to a disparity between the tasks used. We adopted a word-spotting paradigm, with Spanish children of second grade (mean age: 7 years) and sixth grade (mean age: 11 years). The children were…
Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.
The present study investigates the applicability of the word association model to the second-language word processing abilities of Kurdish learners of Persian. The aim of this study was to examine whether beginning L2 learners use their L1 as a mediating tool to process L2 words, or whether pictures representing pre-existing concepts facilitate L2 word processing. Ten Kurdish-Persian bilingual adults at the beginning stages of learning Persian were compared with 10 native speakers of Persian who were fluent in Kurdish. Participants in the two groups performed a translation-recognition task: they had to decide whether words in the two languages were translation equivalents. They were also compared on a picture recognition task in order to compare the reaction times (RTs) of L1-L2 and picture-L2 pairings. The findings showed that Kurdish-Persian bilinguals performed faster in L1-L2 than in picture-L2, whereas the performance of Persian-Kurdish bilinguals was comparable on both L1-L2 and picture-L2, as predicted by the word association model. These results suggest that L1 words and pictures have different effects on the word processing abilities of bilinguals.
Written word recognition is a sine qua non of reading. The acquisition and development of word recognition requires the synergistic working of multiple factors and processes. In this study, developmental and expert models of reading that explain the mechanisms underlying the acquisition and expert performance on this important skill are examined. Likewise, reading brain development and the implied cognitive processes are also addressed, as a means for a better understanding of reading typical development as well as reading disabilities.
Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella
Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.
The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
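The multiplicative route to the log-normal described in this abstract can be sketched in a few lines of Python. This is an illustrative simulation, not the authors' eye-tracking analysis: the factor distribution and sample sizes are arbitrary choices made only to show that the log of a product of many independent positive factors behaves like a CLT sum.

```python
import math
import random

random.seed(0)

def multiplicative_sample(n_factors=100):
    """One draw from a multiplicative process: the product of n positive random factors."""
    x = 1.0
    for _ in range(n_factors):
        x *= random.uniform(0.5, 1.5)
    return x

# Taking logs turns each product into a sum, so the CLT predicts the
# log-values are approximately normal, i.e. the samples are log-normal.
logs = [math.log(multiplicative_sample()) for _ in range(2000)]
mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / len(logs)
# If the logs are roughly normal, their sample skewness should be near zero.
skew = sum((v - mean) ** 3 for v in logs) / (len(logs) * var ** 1.5)
```

Plotting a histogram of `logs` would show the familiar bell shape, while a histogram of the raw samples would be heavily right-skewed, the signature the authors report for fixation times.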
The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…
Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna
In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing - an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.
Ding, Jinfeng; Wang, Lin; Yang, Yufang
In the present study, we aimed to examine how the emotionality of words influences online sentence processing-specifically, the influence of emotional words on the processing of following words in sentences. We manipulated the emotionality of verbs as well as the orthographic correctness of their following (neutral) object nouns, so that the orthographic violation of the (neutral) nouns occurred in either emotional or neutral sentences. Event-related potentials (ERPs) were recorded to both the nouns and the verbs. We found that the orthographic violation of the nouns elicited a P2 and an N400 effect in the emotionally neutral sentences, but an LPC effect in the emotionally charged sentences. We also found that the emotional verbs elicited a larger N1, a larger P2, and a larger N400 than did the neutral verbs. The ERP results suggest that emotional words capture more attention than neutral words, which further affects early orthographic analysis of the following words. Our findings demonstrate a dynamic influence of emotional words on sentence processing.
Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan
Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187-201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms.
The temporal dynamics and anatomical correlates underlying human visual cognition are traditionally assessed as a function of stimulus properties and task demands. Any non-stimulus related activity is commonly dismissed as noise and eliminated to extract an evoked signal that is only a small fraction of the magnitude of the measured signal. We review studies that challenge this view by showing that non-stimulus related activity is not mere noise but that it has a well-structured organization which can largely determine the processing of upcoming stimuli. We review evidence from human electrophysiology that shows how different aspects of pre-stimulus activity such as pre-stimulus EEG frequency power and phase and pre-stimulus EEG microstates can determine qualitative and quantitative properties of both lower and higher level visual processing. These studies show that low-level sensory processes depend on the momentary excitability of sensory cortices whereas perceptual processes leading to stimulus awareness depend on momentary pre-stimulus activity in higher-level non-visual brain areas. Speed and accuracy of stimulus identification have likewise been shown to be modulated by pre-stimulus brain states.
Chen, Yi-Chuan; Spence, Charles
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…
Alan C-N Wong
Perceptual expertise has been studied intensively with faces and object categories involving detailed individuation. A common finding is that experience in fulfilling the task demand of fine, subordinate-level discrimination between highly similar instances is associated with the development of holistic processing. This study examines whether holistic processing is also engaged by expert word recognition, which is thought to involve coarser, basic-level processing that is more part-based. We adopted a paradigm widely used for faces, the composite task, and found clear evidence of holistic processing for English words. A second experiment further showed that holistic processing for words was sensitive to the amount of experience with the language concerned (native vs. second-language readers) and with the specific stimuli (words vs. pseudowords). The adoption of a paradigm from the face perception literature to the study of expert word perception is important for further comparison between perceptual expertise with words and face-like expertise.
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.
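The bag-of-visual-words representation underlying such integration can be sketched in plain NumPy. This is a toy illustration, not the authors' pipeline: random vectors stand in for real SIFT and SURF descriptors, and a tiny k-means replaces a production clustering step; only the overall scheme (quantize each feature type against its own vocabulary, then concatenate the normalized histograms into one signature) is taken from the abstract.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Toy k-means over local feature descriptors to form a visual vocabulary."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then recompute centers.
        dist = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize one image's descriptors against the vocabulary; L1-normalized histogram."""
    dist = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(dist.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Random stand-ins for one image's SIFT-like and SURF-like descriptors
# (real SIFT is 128-D and SURF 64-D; 8-D keeps the toy fast).
rng = np.random.default_rng(1)
sift_like = rng.normal(size=(200, 8))
surf_like = rng.normal(size=(200, 8))

vocab_a = build_vocabulary(sift_like, k=10)
vocab_b = build_vocabulary(surf_like, k=10)
# "Visual words integration": concatenate the two histograms into one signature.
signature = np.concatenate([bow_histogram(sift_like, vocab_a),
                            bow_histogram(surf_like, vocab_b)])
```

In a real CBIR system the stand-in arrays would come from actual detectors (e.g. OpenCV's SIFT implementation), and retrieval would rank database images by a histogram distance between signatures.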
Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard
Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as up, influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious information is generally rare, unconscious meaning stemming from only 1 particular modality could, in principle, be available for other modalities. Also, on the basis of known influences and dependencies of meaning on sensory information processing, such an unconscious meaning-based effect could impact sensory processing in a different modality. In 3 experiments, this prediction was confirmed. We found that an unconscious spatial word, such as up, facilitated position discrimination of a spatially congruent sound (here, a sound from above) as compared to a spatially incongruent sound (here, from below). This was found even though participants did not recognize the meaning of the primes. The results show that unconscious processing extends to semantic-sensory connections between different modalities. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kensinger, Elizabeth A; Schacter, Daniel L
There is considerable debate regarding the extent to which limbic regions respond differentially to items with different valences (positive or negative) or to different stimulus types (pictures or words). In the present event-related fMRI study, 21 participants viewed words and pictures that were neutral, negative, or positive. Negative and positive items were equated on arousal. The participants rated each item for whether it depicted or described something animate or inanimate or something common or uncommon. For both pictures and words, the amygdala, dorsomedial prefrontal cortex (PFC), and ventromedial PFC responded equally to all high-arousal items, regardless of valence. Laterality effects in the amygdala were based on the stimulus type (word = left, picture = bilateral). Valence effects were most apparent when the individuals processed pictures, and the results revealed a lateral/medial distinction within the PFC: The lateral PFC responded differentially to negative items, whereas the medial PFC was more engaged during the processing of positive pictures.
Carretié, Luis; Hinojosa, José A; Albert, Jacobo; López-Martín, Sara; De La Gándara, Belén S; Igoa, José M; Sotillo, María
Contrary to what occurs with negative pictures, negative words are, in general, not capable of interfering with performance in ongoing cognitive tasks in normal subjects. A probable explanation is the limited arousing power of linguistic material. Especially intense words (insults and compliments), neutral personal adjectives, and pseudowords were presented to 28 participants while they executed a lexical decision task. Insults were associated with the poorest performance in the task and compliments with the best. Amplitude of the late positive component of the event-related potentials, originating at parietal areas, was maximal in response to compliments and insults, but latencies were delayed in response to the latter. Results suggest that intense emotional words modulate ongoing cognitive processes through both bottom-up (attentional capture by insults) and top-down (facilitation of cognitive processing by arousing words) mechanisms.
In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Memelink & Hommel, in press; Wykowska, Schubö, & Hommel, 2009), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be-performed movement was signaled either by a picture of the required action or by a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues, not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.
Domínguez, Alberto; Alija, Maira; Cuetos, Fernando; de Vega, Manuel
Behavioral measures in visual priming tasks show opposite effects for syllables and morphemes, which indicates that they are processed by two independent systems. We used event-related potentials (ERPs) to explore two priming situations in Spanish: prefix-related words (reacción-REFORMA [reaction-reform]), in which prime and target words shared a first syllable that was also a prefix, and syllable-related words (regalo-REFORMA [gift-reform]), in which the shared first syllable was a pseudoprefix in the prime word. Prefix-related pairs, unlike syllable-related pairs, evoked a very early positivity in reaction to the target (in the 150-250 ms window), suggesting that prefix information is immediately available at a prelexical stage. By contrast, syllable-related pairs showed a larger N400 effect. This late negativity may be caused by lateral inhibition among lexical candidates activated in the lexicon by the prime's first syllable.
Several previous studies showed that synthetic vowel identification is more difficult for voices with a high f0 (the fundamental frequency, which determines voice pitch), but it is not clear whether this means that female voices, which generally have a higher f0, are processed more slowly than male voices. A word-spotting experiment was conducted with 25 French native listeners (8 men, 17 women; M age = 27.6 yr., SD = 10.8). Words produced by four male and four female speakers were played to the participants. Their task was to press a button every time they identified the target word "étage." Response times were collected and compared across four conditions: male voice preceded by male voices, female voice preceded by female voices, male voice preceded by female voices, and female voice preceded by male voices. Results showed that both sexes' voices were processed equally fast. Moreover, no significant correlation was found between the mean f0 of the target word and response time. Nevertheless, when a target word produced by a male speaker occurred after several words produced by a female speaker (or vice versa), the listener's RT decreased, suggesting that male and female voices are processed as two different entities.
The present study investigated the electrophysiological correlates of morphological processing in Chinese compound word reading using a delayed repetition priming paradigm. Participants were asked to passively view lists of two-character compound words containing prime-target pairs separated by a few items. In the Whole Word repetition condition, the prime and target were the same real word (e.g., 经理-经理, manager-manager). In the Constituent repetition condition, the prime and target were swapped in terms of their constituent position (e.g., 士护-护士; the former is a pseudo-word and the latter means nurse). Two ERP components, the N200 and the N400, showed repetition effects. The N200 showed a negative shift upon repetition in the Whole Word condition, but this effect was delayed in the Constituent condition. The N400 showed comparable amplitude reduction across the two priming conditions. The results reveal different aspects of morphological processing, with an early stage associated with the N200 and a late stage with the N400. There is also the possibility that the N200 effect reflects general cognitive processing, i.e., the detection of low-probability stimuli.
Ponari, Marta; Rodríguez-Cuadrado, Sara; Vinson, David; Fox, Neil; Costa, Albert; Vigliocco, Gabriella
Effects of emotion on word processing are well established in monolingual speakers. However, studies that have assessed whether affective features of words undergo the same processing in a native and nonnative language have provided mixed results: Studies that have found differences between native language (L1) and second language (L2) processing attributed the difference to the fact that L2 learned late in life would not be processed affectively, because affective associations are established during childhood. Other studies suggest that adult learners show similar effects of emotional features in L1 and L2. Differences in affective processing of L2 words can be linked to age and context of learning, proficiency, language dominance, and degree of similarity between L2 and L1. Here, in a lexical decision task on tightly matched negative, positive, and neutral words, highly proficient English speakers from typologically different L1s showed the same facilitation in processing emotionally valenced words as native English speakers, regardless of their L1, the age of English acquisition, or the frequency and context of English use.
Syssau, Arielle; Laxén, Jannika
The aim of this study was to expand our knowledge of the influence of emotional valence on visual word recognition by answering two questions: first, whether the emotional valence effect is sensitive to different types of task requirements, and second, whether word polysemy can modulate the effect of emotional valence. For this purpose, we orthogonally manipulated emotional valence (negative, positive, and neutral words) and polysemy (polysemous vs. non-polysemous words) in two versions of the lexical-decision task (one with legal nonwords and one with illegal nonwords). Results showed an effect of task: emotional valence and polysemy influenced lexical decision latencies only in the legal version of the lexical-decision task. Furthermore, the effect of polysemy was dependent on emotional valence: polysemy facilitated recognition of neutral words but not of emotional ones. Conversely, polysemy also modulated the emotional valence effect: the facilitation observed for non-polysemous emotional words compared to non-polysemous neutral words disappeared for polysemous words. These findings fit with other studies showing facilitation for emotional word recognition and allow conclusions concerning the role of semantics in emotional word recognition.
Many neurocognitive studies have investigated the neural correlates of visual word recognition, some of which manipulated the orthographic neighborhood density of words and nonwords, believed to influence the activation of orthographically similar representations in a hypothetical mental lexicon. Previous neuroimaging research failed to find evidence for such global lexical activity associated with neighborhood density; rather, effects were interpreted to reflect semantic or domain-general processing. The present fMRI study revealed effects of lexicality, orthographic neighborhood density, and a lexicality by orthographic neighborhood density interaction in a silent reading task. For the first time, we found greater activity for words and nonwords with a high number of neighbors. We propose that this activity in the dorsomedial prefrontal cortex reflects activation of orthographically similar codes in verbal working memory, thus providing evidence for global lexical activity as the basis of the neighborhood density effect. The interaction of lexicality by neighborhood density in the ventromedial prefrontal cortex showed lower activity in response to words with a high number of neighbors compared to nonwords with a high number of neighbors. In the light of these results, the facilitatory effect for words and the inhibitory effect for nonwords with many neighbors observed in previous studies can be understood as being due to the operation of a fast-guess mechanism for words and a temporal deadline mechanism for nonwords, as predicted by models of visual word recognition. Furthermore, we propose that the lexicality effect, with higher activity for words compared to nonwords in inferior parietal and middle temporal cortex, reflects the operation of an identification mechanism based on local lexico-semantic activity.
We tested current models of morphological processing in reading with data from four visual lexical decision experiments using German compounds and monomorphemic words. Triplets of two semantically transparent noun-noun compounds and one monomorphemic noun were used in Experiments 1a and 1b. Stimuli within a triplet were matched for full-form frequency. The frequency of the compounds' constituents was varied: the compounds of a triplet shared one constituent, while the frequency of the unshared constituent was either high or low, but always higher than the full-form frequency. Reactions were faster to compounds with high-frequency constituents than to compounds with low-frequency constituents, while the latter did not differ from the monomorphemic words. This pattern was not influenced by task difficulty, induced by the type of pseudocompounds used. Pseudocompounds were created either by altering letters of an existing compound (easy pseudocompounds, Experiment 1a) or by combining two free morphemes into a non-existing, but morphologically legal, compound (difficult pseudocompounds, Experiment 1b). In Experiments 2a and 2b, frequency-matched pairs of semantically opaque noun-noun compounds and simple nouns were tested. In Experiment 2a, with easy pseudocompounds (of the same type as in Experiment 1a), a reaction-time advantage for compounds over monomorphemic words was again observed. This advantage disappeared in Experiment 2b, where difficult pseudocompounds were used. Although a dual-route model might account for the data, the findings are best understood in terms of decomposition of low-frequency complex words prior to lexical access, followed by processing costs due to the recombination of morphemes for meaning access. These processing costs vary as a function of intrinsic factors, such as semantic transparency, or external factors, such as the difficulty of the experimental task.
Aurel Ion Clinciu
The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by examining the relations of concept with image in general, and with the self-image mirrored in the body schema in particular. Drawing on the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and operations of thinking at the other. The explanatory potential of Tversky's notion of diagrammatic space is then explored, as an element necessary for understanding the genesis of graphic behaviour and for defining a new construct: graphic intelligence.
Mullins, Carolyn J.; West, Thomas W.
At Indiana University the demand for word processing, the burden on the university's text processors, and the variety of commercial equipment demanded detailed, systematic technical and social planning. In the process of studying available equipment and software, the university's Office Systems Groups developed tools for evaluating the technology.…
Ana Paula Soares
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work addresses this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming in the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant and for words starting with a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.
Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.
Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009
Bar-Kochva, Irit; Hasselhorn, Marcus
This study set out to examine the effects of a morpheme-based training on reading and spelling in fifth and sixth graders (N = 47) who present poor literacy skills and speak German as a second language. A computerized training, consisting of a visual lexical decision task (comprising 2,880 items presented in 12 sessions), was designed to encourage fast morphological analysis in word processing. The children were divided into two groups: one underwent a morpheme-based training, in which the word-stems of inflections and derivations were presented for a limited duration while their pre- and suffixes remained on screen until response. The other group received a control training consisting of the same task, except that the duration of presentation of a non-morphological unit was restricted. In a Word Disruption Task, participants read words under three conditions: morphological separation (with symbols separating the words' morphemes), non-morphological separation (with symbols separating non-morphological units of words), and no separation (with symbols presented at the beginning and end of each word). The group receiving the morpheme-based program improved more than the control group in word reading fluency in the morphological condition. The former group also showed similar word reading fluency after training in the morphological condition and in the no-separation condition, suggesting that the morpheme-based training contributed to the integration of morphological decomposition into the process of word recognition. At the same time, both groups improved similarly on other measures of word reading fluency. With regard to spelling, the morpheme-based training group showed a larger improvement than the control group in the spelling of trained items, and a unique improvement in the spelling of untrained items (untrained word-stems integrated into trained pre- and suffixes). The results further suggest some contribution of the morpheme
Forrin, Noah D; MacLeod, Colin M
In three experiments, we tested a relative-speed-of-processing account of color-word contingency learning, a phenomenon in which color identification responses to high-contingency stimuli (words that appear most often in particular colors) are faster than those to low-contingency stimuli. Experiment 1 showed equally large contingency-learning effects whether responding was to the colors or to the words, likely due to slow responding to both dimensions because of the unfamiliar mapping required by the key press responses. For Experiment 2, participants switched to vocal responding, in which reading words is considerably faster than naming colors, and we obtained a contingency-learning effect only for color naming, the slower dimension. In Experiment 3, previewing the color information resulted in a reduced contingency-learning effect for color naming, but it enhanced the contingency-learning effect for word reading. These results are all consistent with contingency learning influencing performance only when the nominally irrelevant feature is faster to process than the relevant feature, and therefore are entirely in accord with a relative-speed-of-processing explanation.
Hargreaves, Ian S; Pexman, Penny M; Zdrazilova, Lenka; Sargious, Peter
Competitive Scrabble is an activity that involves extraordinary word recognition experience. We investigated whether that experience is associated with exceptional behavior in the laboratory in a classic visual word recognition paradigm: the lexical decision task (LDT). We used a version of the LDT that involved horizontal and vertical presentation and a concreteness manipulation. In Experiment 1, we presented this task to a group of undergraduates, as these participants are the typical sample in word recognition studies. In Experiment 2, we compared the performance of a group of competitive Scrabble players with a group of age-matched nonexpert control participants. The results of a series of cognitive assessments showed that the Scrabble players and control participants differed only in Scrabble-specific skills (e.g., anagramming). Scrabble expertise was associated with two specific effects (as compared to controls): vertical fluency (relatively less difficulty judging lexicality for words presented in the vertical orientation) and semantic deemphasis (smaller concreteness effects for word responses). These results suggest that visual word recognition is shaped by experience, and that with experience there are efficiencies to be had even in the adult word recognition system.
Habekost, Thomas; Vogel, Asmus; Rostrup, Egill
of the speed of a particular psychological process that are not confounded by the speed of other processes. We used Bundesen's (1990) Theory of Visual Attention (TVA) to obtain specific estimates of processing speed in the visual system, controlled for the influence of response latency and individual variations… Dramatic aging effects were found for the perception threshold and the visual apprehension span. In the visual domain, cognitive aging seems to be most clearly related to reductions in processing speed.
The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures, and the picture-word compound stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.
Borgström, Kristina; Torkildsen, Janne von Koss; Lindgren, Magnus
In an event-related potentials (ERP) study, twenty-month-old children (n = 37) were presented with pseudowords to map to novel object referents in five presentations. Quicker attenuation of the visual Negative central component (Nc) to novel objects predicted a larger difference in N400 amplitude between congruous and incongruous presentations of pseudowords at test. Furthermore, better initial recognition of familiar objects (Nc difference between familiar and novel objects) predicted the strength of the N400 incongruity effect to the verbal labels of these real objects. This study presents novel evidence for a link between efficient visual processing of objects and word learning ability.
There is evidence that human-produced androstenes affect attitudinal, emotional, and physiological states in a context-dependent manner, suggesting that they could be involved in modulating social interactions. For instance, androstadienone appears to increase attention specifically to emotional information. Most of the previous work focused on one or two androstenes. Here, we tested whether androstenes affect linguistic processing, using three different androstene compounds. Participants (90 women and 77 men) performed a lexical decision task after being exposed to an androstene or to a control treatment (all compounds were applied on the philtrum). We tested effects on three categories of target words, varying in emotional valence: positive, competitive, and neutral words (e.g., hope, war, and century, respectively). Results show that response times were modulated by androstene treatment and by the emotional valence of words. Androstenone, but not androstadienone or androstenol, significantly slowed down reaction times to words with competitive valence. Moreover, men exposed to androstenol showed a significantly reduced error rate, although men tended to make more errors than women in general. This suggests that these androstenes modulate the processing of emotional words; namely, some particular lexical emotional content may become more salient under the effect of androstenes.
Floyd, J. S.
When scientists begin studying a new geographic region of the Earth, they frequently start by gathering relevant scientific literature in order to understand what is known, for example, about the region's geologic setting, structure, stratigraphy, and tectonic and environmental history. Experienced scientists typically know what keywords to seek and understand that if a document contains one important keyword, then other words in the document may be important as well. Word relationships in a document give rise to what is known in linguistics as the context-dependent nature of meaning. For example, the meaning of the word 'strike' in geology, as in the strike of a fault, is quite different from its popular meaning in baseball. In addition, word order, such as in the phrase 'Cretaceous-Tertiary boundary,' often corresponds to the order of sequences in time or space. The context of words and the relevance of words to each other can be derived quantitatively by machine learning vector representations of words. Here we show the results of training a neural network to create word vectors from scientific research papers from selected rift basins and mid-ocean ridges: the Woodlark Basin of Papua New Guinea, the Hess Deep rift, and the Gulf of Mexico basin. The word vectors are statistically defined by the surrounding words within a given window, limited by the length of each sentence. The word vectors are analyzed by their cosine distance to related words (e.g., 'axial' and 'magma'), classified by high-dimensional clustering, and visualized by reducing the vector dimensions and plotting the vectors on a two- or three-dimensional graph. Similarity analysis of 'Triassic' and 'Cretaceous' returns 'Jurassic' as the nearest word vector, suggesting that the model is capable of learning the geologic time scale. Similarity analysis of 'basalt' and 'minerals' automatically returns mineral names such as 'chlorite', 'plagioclase', and 'olivine.' Word vector analysis and visualization
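The nearest-word queries described in this abstract (e.g., 'Triassic' returning 'Jurassic') reduce to a cosine-similarity ranking over the learned vectors. A minimal NumPy sketch of that lookup follows; the vectors are made-up 4-dimensional toys chosen only to illustrate the mechanics, whereas a trained model such as word2vec would learn vectors with hundreds of dimensions from the corpus itself.

```python
import numpy as np

# Hypothetical toy vectors; in practice these come from a trained model.
vectors = {
    "triassic":   np.array([0.90, 0.10, 0.00, 0.20]),
    "jurassic":   np.array([0.85, 0.15, 0.05, 0.25]),
    "cretaceous": np.array([0.80, 0.20, 0.10, 0.30]),
    "basalt":     np.array([0.10, 0.90, 0.30, 0.00]),
    "olivine":    np.array([0.15, 0.85, 0.35, 0.05]),
}

def cosine(u, v):
    """Cosine similarity: the dot product of the two unit-normalized vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest(word, k=1):
    """Return the k vocabulary words closest to `word` by cosine similarity."""
    sims = sorted(
        ((other, cosine(vectors[word], vec))
         for other, vec in vectors.items() if other != word),
        key=lambda pair: pair[1], reverse=True)
    return [other for other, _ in sims[:k]]

print(nearest("triassic"))  # with these toy vectors: ['jurassic']
```

In a library such as gensim the equivalent query is `model.wv.most_similar('triassic')`; the point here is only that "nearest word vector" means ranking the whole vocabulary by cosine similarity.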
Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir
Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question, as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations for words. This further supports the notion that many visual regions in general, even early visual areas, maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that despite only short sensory substitution experience, orthographic task processing can dominate semantic processing in the VWFA. On a wider
This paper presents visual autoethnography as a method for exploring the embodied performances of tourists' experiences. As a fusion of visual elicitation and autoethnographic encounter, visual autoethnography mobilises spaces of understanding, transcending limitations of verbal discourse and opening spaces for mutual appreciation and reflection. The paper proposes that, through visual autoethnography, researcher and respondents connect through intersubjective negotiation, unpacking intricate perf...
Aparicio, Xavier; Lavaur, Jean-Marc
The present study aims to investigate how trilinguals process their two non-dominant languages and how those languages influence one another, as well as the relative importance of the dominant language on their processing. With this in mind, 24 French (L1)- English (L2)- and Spanish (L3)-unbalanced trilinguals, deemed equivalent in their L2 and L3…
Comparative study of semantic access in the visual processing of words between monolingual Brazilians and multilingual Chinese speakers of Brazilian Portuguese as a foreign language
Jerusa Fumagalli de Salles
Semantic priming is one way of assessing semantic word processing. If semantics is an important contributing factor in visual word recognition, the question arises whether multilingual Chinese speakers (Mandarin as L1, English as L2) who are learning Portuguese as L3 can benefit from semantic context in a lexical decision task in Portuguese, compared with controls (Brazilian university students and children). Besides comparing the magnitude of the semantic priming effect between the Chinese and Brazilian samples, the study investigated, in the Chinese group, the relation between performance in the semantic priming experiment and in a phonological awareness task, both in Portuguese. Forty multilingual Chinese university students, 31 Brazilian university students, and 26 third-grade children participated. A semantic priming effect was found in the Chinese participants and in the Brazilian adults and children, that is, faster responses in the related-prime condition than in the unrelated-prime condition. There were no significant differences in effect magnitude between the adult groups, but the children showed a larger effect than the Chinese participants. Within the Chinese group, there was no correlation between scores in the lexical decision task under the semantic priming paradigm and the phonological awareness assessment. The Chinese participants appear to have accessed the meaning of visually presented primes in Portuguese, not differing from the Brazilian adults and children.
Augustin, M Dorothee; Wagemans, Johan; Carbon, Claus-Christian
A central problem in the literature on psychological aesthetics is a lack of precision in terminology regarding the description and measurement of aesthetic impressions. The current research project approached the problem of terminology empirically, by studying people's word usage to describe aesthetic impressions. For eight different object classes that are relevant in visual aesthetics, including visual art, landscapes, faces and different design classes, we examined which words people use to describe their aesthetic impressions, and which general conceptual dimensions might underlie similarities and differences between the classes. The results show an interplay between generality and specificity in aesthetic word usage. In line with results by Jacobsen, Buchta, Kohler, and Schroger (2004), beautiful and ugly seem to be the words with the most general relevance, but in addition each object class has its own distinct pattern of relevant terms. Multidimensional scaling and correspondence analysis suggest that the most extreme positions in aesthetic word usage for the classes studied are taken by landscapes and geometric shapes and patterns. This research aims to develop a language of aesthetics for the visual modality. Such a common vocabulary should facilitate the development of cross-disciplinary models of aesthetics and create a basis for the construction of standardised aesthetic measures. Copyright © 2011 Elsevier B.V. All rights reserved.
Yurovsky, Daniel; Yu, Chen; Smith, Linda B.
Cross-situational word learning, like any statistical learning problem, involves tracking the regularities in the environment. But the information that learners pick up from these regularities depends on their learning mechanism. This paper investigates the role of one type of mechanism in statistical word learning: competition. Competitive mechanisms would allow learners to find the signal in noisy input, and would help to explain the speed with which learners succeed in statistical learning tasks. Because cross-situational word learning provides information at multiple scales (both within and across trials/situations), learners could implement competition at either or both of these scales. A series of four experiments demonstrates that cross-situational learning involves competition at both levels of scale, and that these mechanisms interact to support rapid learning. The impact of both mechanisms is then considered from the perspective of a process-level understanding of cross-situational learning.
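The two scales of competition described in this abstract can be made concrete with a toy associative model. This is a sketch under assumed mechanics, not the authors' actual paradigm or analysis: within a trial, each word's attention is divided among candidate referents in proportion to the association strength already accumulated, so a pairing that is already strong "explains away" its referent and starves rival mappings across trials.

```python
def train(trials, alpha=1.0, prior=0.1, competition=True):
    """Toy cross-situational learner with competition.

    trials: list of (words, referents) pairs presented together.
    With competition on, each word's within-trial attention is split
    among candidate referents in proportion to the strength already
    accumulated for each word-referent pair (within-trial scale);
    strong pairs therefore keep absorbing evidence and suppress
    rival mappings over trials (cross-trial scale).
    """
    assoc = {}
    for words, refs in trials:
        for w in words:
            strengths = [assoc.get((w, r), prior) for r in refs]
            total = sum(strengths)
            for r, s in zip(refs, strengths):
                share = s / total if competition else 1.0 / len(refs)
                assoc[(w, r)] = assoc.get((w, r), 0.0) + alpha * share
    return assoc

def best_referent(assoc, word, referents):
    """The referent most strongly associated with the word."""
    return max(referents, key=lambda r: assoc.get((word, r), 0.0))
```

On a classic ambiguous design (each trial pairs two words with two referents, no single trial unambiguous), the competitive learner concentrates evidence on the correct pairings; with competition=False the model reduces to plain co-occurrence counting.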
Devereux, Barry J.; Clarke, Alex; Marouchos, Andreas; Tyler, Lorraine K.
Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects. PMID:24285896
This paper describes a technique for embedding document metadata, and potentially other semantic references, inline in word processing documents, which the authors have implemented with the help of a software development team. Several assumptions underlie the approach: it must be available across computing platforms and work with both Microsoft Word (because of its user base) and OpenOffice.org (because of its free availability). Further, the application needs to be acceptable to and usable by users, so the initial implementation covers only a small number of features, which will be extended only after user testing. Within these constraints the system provides a mechanism not only for encoding simple metadata, but also for inferring hierarchical relationships between metadata elements from a 'flat' word processing file. The paper includes links to open source code implementing the techniques as part of a broader suite of tools for academic writing. This work addresses tools and software, the semantic web, data curation, and the integration of curation into research workflows, and will provide a platform for integrating work on ontologies, vocabularies and folksonomies into word processing tools.
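One piece of the approach, inferring hierarchical relationships between metadata elements held in a flat file, can be sketched as follows. The dotted-key naming convention (e.g. "dc.creator.name") is an illustrative assumption on my part, not necessarily the encoding the authors chose:

```python
def nest(flat):
    """Infer a metadata hierarchy from flat dotted keys,
    e.g. {"dc.title": "T"} -> {"dc": {"title": "T"}}.

    flat: mapping from dotted key paths to values, as might be
    recovered from inline metadata fields in a document.
    """
    tree = {}
    for key, value in flat.items():
        node = tree
        *parents, leaf = key.split(".")
        for part in parents:
            # descend, creating intermediate levels on demand
            node = node.setdefault(part, {})
        node[leaf] = value
    return tree
```

Keys without dots stay at the top level, so a mixed flat record round-trips into a tree with both leaf and nested entries.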
Ormel, Ellen; Hermans, Daan; Knoors, Harry; Hendriks, Angelique; Verhoeven, Ludo
Purpose: Phonological activation during visual word recognition was studied in deaf and hearing children under two circumstances: (a) when the use of phonology was not required for task performance and might even hinder it and (b) when the use of phonology was critical for task performance. Method: Deaf children mastering written Dutch and Sign…
Marslen-Wilson, William D.; Bozic, Mirjana; Randall, Billi
The role of morphological, semantic, and form-based factors in the early stages of visual word recognition was investigated across different SOAs in a masked priming paradigm, focusing on English derivational morphology. In a first set of experiments, stimulus pairs co-varying in morphological decomposability and in semantic and orthographic…
Pawara, Pornntiwa; Okafor, Emmanuel; Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco
The use of machine learning and computer vision methods for recognizing different plants from images has attracted considerable attention from the community. This paper aims at comparing local feature descriptors and bags of visual words with different classifiers to deep convolutional neural networks…
Okafor, Emmanuel; Pawara, Pornntiwa; Karaaba, Mahir; Surinta, Olarik; Codreanu, Valeriu; Schomaker, Lambertus; Wiering, Marco
Most research in image classification has focused on applications such as face, object, scene and character recognition. This paper presents a comparative study between deep convolutional neural networks (CNNs) and bag of visual words (BOW) variants for recognizing animals. We developed two variants…
Dewolf, Tinne; Van Dooren, Wim; Verschaffel, Lieven
We investigated the effect of two visual aids in representational illustrations on pupils' realistic word problem solving. In part 1 of our study, 288 elementary school pupils received an individual paper-and-pencil task with seven problematic items (P-items) in which realistic considerations need to be made to come to an appropriate reaction.…
Jasper J F van den Bosch
Dual-coding theory (Paivio, 1986) postulates that the human mind represents objects not just with an analogous, or semantic, code, but with a perceptual representation as well. Previous studies (e.g., Fiebach & Friederici, 2004) indicated that the modality of this representation is not necessarily the one that triggers the representation. The human visual cortex contains several regions, such as the Lateral Occipital Complex (LOC), that respond specifically to object stimuli. To investigate whether these principally visual representation regions are also recruited for auditory stimuli, we presented subjects with spoken words with specific, concrete meanings ('car') as well as words with abstract meanings ('hope'). Their brain activity was measured with functional magnetic resonance imaging. Whole-brain contrasts showed overlap between regions differentially activated by words for concrete objects compared to words for abstract concepts and visual regions activated by a contrast of object versus non-object visual stimuli. We functionally localized LOC for individual subjects, and a preliminary analysis showed a trend for a concreteness effect in this region of interest at the group level. Appropriate further analysis might include connectivity and classification measures. These results can shed light on the role of crossmodal representations in cognition.
Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe Allerup
multiple stimuli are presented simultaneously: Are words treated as units or wholes in visual short term memory? Using methods based on a Theory of Visual Attention (TVA), we measured perceptual threshold, visual processing speed and visual short term memory capacity for words and letters, in two simple...
Smith, Linda B; Yu, Chen
Recent evidence shows that infants can learn words and referents by aggregating ambiguous information across situations to discern the underlying word-referent mappings. Here, we use an individual difference approach to understand the role of different kinds of attentional processes in this learning: 12- and 14-month-old infants participated in a cross-situational word-referent learning task in which the learning trials were ordered to create local novelty effects, effects that should not alter the statistical evidence for the underlying correspondences. The main dependent measures were derived from frame-by-frame analyses of eye gaze direction. The fine-grained dynamics of looking behavior implicate different attentional processes that may compete with or support statistical learning. The discussion considers the role of attention in binding heard words to seen objects, individual differences in attention and vocabulary development, and the relation between macro-level theories of word learning and the micro-level dynamic processes that underlie learning.
Tucha, Oliver; Trumpp, Christian; Lange, Klaus W
It is generally assumed that the lexical and phonological systems are involved in writing to dictation. In an experiment concerned with the writing of words and non-words to dictation, the handwriting of female students was recorded using a digitising tablet. The data contradict the assumption that the phonological system represents an alexical process. Both words and non-words that were acoustically presented to the subjects were lexically parsed. The analysis of kinematic data revealed significant differences between the subjects' writing of words and non-words, with gross disturbances of handwriting fluency during the writing of non-words. The findings of the experiment cannot be explained by the dual-process theory.
Chetail, Fabienne; Content, Alain
The processes and the cues determining the orthographic structure of polysyllabic words remain far from clear. In the present study, we investigated the role of letter category (consonant vs. vowels) in the perceptual organization of letter strings. In the syllabic counting task, participants were presented with written words matched for the…
Chaparro, Alex; Liao, Corrina
Previous research has demonstrated that the masking effects of flankers around a target in the peripheral retina are not isotropic. Rather, regions of lateral interaction are ellipsoid in shape, with the major axis oriented radially along a meridian through the fovea. This finding leads to the counterintuitive prediction that horizontal text positioned to the right of fixation might be read more slowly than similarly positioned text oriented diagonally or vertically. Similarly, vertically oriented text above fixation might be read more slowly than horizontally or diagonally oriented text above fixation. We investigated the effect of text orientation and inter-character spacing on word identification in the retinal periphery. Text was presented by rapid serial visual presentation. Words were centered 3 degrees from fixation along four visual field meridians (VM) (right horizontal, upper-right diagonal, vertical, and upper-left diagonal). Regardless of VM, identification performance was best for horizontal text, declining slightly for orientations between +60 degrees and -60 degrees and declining more quickly for more acute orientations. A weak effect of VM was observed for text with normal inter-character spacing: performance was best for text centered along the horizontal meridian and declined slightly along the other VM. Finally, identification rates increased by approximately 33 words per minute with the addition of one character space between adjacent letters. Word-recognition processes are thus very tolerant of text orientation, exhibiting only a modest decline for orientations within ±60 degrees of horizontal, regardless of VM.
Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights. © 2011 IEEE.
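The reconstruction step behind the QP assignment described above can be sketched in miniature. The closed form below solves the sum-to-one equality-constrained least-squares problem over a descriptor's neighbouring visual words (the same form used in locality-constrained linear coding); treating the QP this way, and the small ridge term lam, are illustrative assumptions, not the paper's exact formulation:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def reconstruction_weights(x, words, lam=1e-4):
    """Weights reconstructing descriptor x from its neighbouring visual
    words, constrained to sum to one: solve (C + lam*I) w = 1 over the
    shifted covariance C_ij = <w_i - x, w_j - x>, then normalize."""
    k = len(words)
    diffs = [[wi - xi for wi, xi in zip(w, x)] for w in words]
    C = [[sum(a * b for a, b in zip(diffs[i], diffs[j]))
          + (lam if i == j else 0.0) for j in range(k)] for i in range(k)]
    w = solve(C, [1.0] * k)
    s = sum(w)
    return [wi / s for wi in w]
```

A descriptor lying exactly on one visual word receives nearly all of the weight, so a contribution function built from these weights degrades gracefully toward hard nearest-neighbour assignment.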
Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent
How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rayner, Keith; Angele, Bernhard; Schotter, Elizabeth R; Bicknell, Klinton
Whether readers always identify words in the order they are printed is subject to considerable debate. In the present study, we used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the preview for a two-word target region (e.g. white walls in My neighbor painted the white walls black). Readers received an identical (white walls), transposed (walls white), or unrelated preview (vodka clubs). We found that there was a clear cost of having a transposed preview compared to an identical preview, indicating that readers cannot or do not identify words out of order. However, on some measures, the transposed preview condition did lead to faster processing than the unrelated preview condition, suggesting that readers may be able to obtain some useful information from a transposed preview. Implications of the results for models of eye movement control in reading are discussed.
Jay, Timothy; Caldwell-Harris, Catherine; King, Krista
People remember emotional and taboo words better than neutral words. It is well known that words that are processed at a deep (i.e., semantic) level are recalled better than words processed at a shallow (i.e., purely visual) level. To determine how depth of processing influences recall of emotional and taboo words, a levels of processing paradigm was used. Whether this effect holds for emotional and taboo words has not been previously investigated. Two experiments demonstrated that taboo and emotional words benefit less from deep processing than do neutral words. This is consistent with the proposal that memories for taboo and emotional words are a function of the arousal level they evoke, even under shallow encoding conditions. Recall was higher for taboo words, even when taboo words were cued to be recalled after neutral and emotional words. The superiority of taboo word recall is consistent with cognitive neuroscience and brain imaging research.
Do task demands change the way we extract information from a stimulus, or only how we use this information for decision making? In order to answer this question for visual word recognition, we used EEG/MEG as well as fMRI to determine the latency ranges and spatial areas in which brain activation to words is modulated by task demands. We presented letter strings in three tasks (lexical decision, semantic decision, silent reading) and measured combined EEG/MEG as well as fMRI responses in two separate experiments. EEG/MEG sensor statistics revealed the earliest reliable task effects at around 150 ms, which were localized, using minimum norm estimates (MNE), to left inferior temporal, right anterior temporal and left precentral gyri. Later task effects (250 ms and 480 ms) occurred in left middle and inferior temporal gyri. Our fMRI data showed task effects in left inferior frontal, posterior superior temporal and precentral cortices. Although there was some correspondence between fMRI and EEG/MEG localizations, discrepancies predominated. We suggest that fMRI may be less sensitive to the early, short-lived processes revealed in our EEG/MEG data. Our results indicate that task-specific processes begin to penetrate word recognition as early as 150 ms, suggesting that early word processing is flexible and intertwined with decision making.
Osiurak, François; Bergot, Morgane; Chainay, Hanna
For theories of embodied cognition, reading a word activates sensorimotor representations in a similar manner to seeing the physical object the word represents. Thus, reading words representing objects of different sizes interferes with motor planning, inducing changes in grip aperture. An outstanding issue is whether word reading can also evoke sensorimotor information about the weight of objects. This issue was addressed in two experiments wherein participants first had to read the name of an object (Experiment 1) or observe the object (Experiment 2), and then to transport versus use bottles of water. The objects presented as primes were either lighter or heavier than the bottles to be grasped. Results indicated that the main parameters of motor planning recorded (initiation times and finger contact points) were not affected by the presentation of words as primes (Experiment 1). By contrast, the presentation of visual objects as primes induced significant changes in these parameters (Experiment 2). Participants changed their way of grasping the bottles, particularly in the use condition. Taken together, these results suggest that the activation of concepts does not automatically evoke sensorimotor representations about the weight of objects, but visual objects do. Copyright © 2015 Elsevier B.V. All rights reserved.
Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; Zarabozo-Hurtado, Daniel; González-Garrido, Andrés A; Gudayol-Ferré, Esteve
The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity involved in processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc spelling-related out-of-scanner tests: a high spelling skills (HSS) group and a low spelling skills (LSS) group. During the fMRI session, two experimental tasks were applied (a spelling recognition task and a visuoperceptual recognition task). Regions of interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each group of spelling competence (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group's SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, but still congruent with the previous results, with an important role for several areas. In general, these results are consistent with the major findings of partial studies of linguistic activities, but they are the first analyses of statistical effective brain connectivity in transparent languages.
Starrfelt, Randi; Habekost, Thomas; Gerlach, Christian
Whether pure alexia is a selective disorder that affects reading only, or if it reflects a more general visual disturbance, is highly debated. We have investigated the selectivity of visual deficits in a pure alexic patient (NN) using a combination of psychophysical measures, mathematical modelling and more standard experimental paradigms. NN's naming and categorization of line drawings were normal with regards to both errors and reaction times (RTs). Psychophysical experiments revealed that NN's recognition of single letters at fixation was clearly impaired, and recognition of single digits was also affected. His visual apprehension span was markedly reduced for letters and digits. His reduced visual processing capacity was also evident when reporting letters from words. In an object decision task with fragmented pictures, NN's performance was abnormal. Thus, even in a pure alexic patient with intact recognition of line drawings, we find evidence of a general visual deficit not selective to letters or words. This finding is important because it raises the possibility that other pure alexics might have similar non-selective impairments when tested thoroughly. We argue that the general visual deficit in NN can be accounted for in terms of inefficient build-up of sensory representations, and that this low level deficit can explain the pattern of spared and impaired abilities in this patient. Copyright 2009 Elsevier Srl. All rights reserved.
Baumgaertner, Annette; Hartwigsen, Gesa; Roman Siebner, Hartwig
Verbal stimuli often induce right-hemispheric activation in patients with aphasia after left-hemispheric stroke. This right-hemispheric activation is commonly attributed to functional reorganization within the language system. Yet previous evidence suggests that functional activation in right-hemispheric homologues of classic left-hemispheric language areas may partly be due to processing nonlinguistic perceptual features of verbal stimuli. We used functional MRI (fMRI) to clarify the role of the right hemisphere in the perception of nonlinguistic word features in healthy individuals. Participants made… …in some instances, be driven by a "nonlinguistic perceptual processing" mode that focuses on nonlinguistic word features. This raises the possibility that stronger activation of right inferior frontal areas during language tasks in aphasic patients with left-hemispheric stroke may at least partially…
Anderson, Charles H.; Van Essen, David C.
Report reviews and analyzes information-processing strategies and pathways in primate retina and visual cortex. Of interest both in biological fields and in such related computational fields as artificial neural networks. Focuses on data from macaque, which has superb visual system similar to that of humans. Authors stress concept of "good engineering" in understanding visual system.
van der Schaaf, Arjen
The visual system of a human or animal that functions in its natural environment receives huge amounts of visual information. This information is vital for the survival of the organism. In this thesis I follow the hypothesis that evolution has optimised the biological visual system to process the…
Paul Hockings’ Principles of Visual Anthropology opened with Margaret Mead’s article ‘Visual Anthropology in a Discipline of Words’. In her prefatory lines Mead lamented that too many research projects “insist on continuing the hopelessly inadequate note-taking of an earlier age.” Today, some forty years after the first publication of Mead’s text, the opposition of the verbal and the visual still seems to loom over the full acceptance of the visual in cultural anthropology.
Victor, Jonathan D; Conte, Mary M; Chubb, Charles F
Visual textures are a class of stimuli with properties that make them well suited for addressing general questions about visual function at the levels of behavior and neural mechanism. They have structure across multiple spatial scales, they put the focus on the inferential nature of visual processing, and they help bridge the gap between stimuli that are analytically convenient and the complex, naturalistic stimuli that have the greatest biological relevance. Key questions that are well suited for analysis via visual textures include the nature and structure of perceptual spaces, modulation of early visual processing by task, and the transformation of sensory stimuli into patterns of population activity that are relevant to perception.
Taha, Haitham; Khateb, Asaid
The Arabic alphabetical orthographic system has various unique features, including the existence of emphatic phonemic letters. These represent several pairs of letters that share a phonological similarity and use the same parts of the articulation system. The phonological and articulatory similarities between these letters lead to spelling errors in which the subject tends to produce a pseudohomophone (PHw) instead of the correct word. Here, we investigated whether or not the unique orthographic features of written Arabic words modulate early orthographic processes. For this purpose, we analyzed event-related potentials (ERPs) collected from adult skilled readers during an orthographic decision task on real words and their corresponding PHw. The subjects' reaction times (RTs) were faster for words than for PHw. ERP analysis revealed significant response differences between words and PHw starting during the N170 and extending to the P2 component, with no difference during processing steps devoted to phonological and lexico-semantic processing. Amplitude and latency differences were also found during the P6 component, which peaked earlier for words and for which source localization indicated the involvement of the classical left language areas. Our findings replicate some of the previous findings on PHw processing and extend them to early orthographic processes.
Vasu, Ellen Storey; Howe, Ann C.
This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.
Havy, Mélanie; Bertoncini, Josiane; Nazzi, Thierry
Consonants and vowels have been shown to play different relative roles in different processes, including retrieving known words from pseudowords during adulthood or simultaneously learning two phonetically similar pseudowords during infancy or toddlerhood. The current study explores the extent to which French-speaking 3- to 5-year-olds exhibit a so-called "consonant bias" in a task simulating word acquisition, that is, when learning new words for unfamiliar objects. In Experiment 1, the to-be-learned words differed both by a consonant and a vowel (e.g., /byf/-/duf/), and children needed to choose which of the two objects to associate with a third one whose name differed from both objects by either a consonant or a vowel (e.g., /dyf/). In such a conflict condition, children needed to favor (or neglect) either consonant information or vowel information. The results show that only 3-year-olds preferentially chose the consonant identity, thereby neglecting the vowel change. The older children (and adults) did not exhibit any response bias. In Experiment 2, children needed to pick up one of two objects whose names differed on either consonant information or vowel information. Whereas 3-year-olds performed better with pairs of pseudowords contrasting on consonants, the pattern of asymmetry was reversed in 4-year-olds, and 5-year-olds did not exhibit any significant response bias. Interestingly, girls showed overall better performance and exhibited earlier changes in performance than boys. The changes in consonant/vowel asymmetry in preschoolers are discussed in relation with developments in linguistic (lexical and morphosyntactic) and cognitive processing. Copyright © 2010 Elsevier Inc. All rights reserved.
Sirinukunwattana, Korsuk; Khan, Adnan M; Rajpoot, Nasir M
Detection and classification of cells in histological images is a challenging task because of the large intra-class variation in the visual appearance of various types of biological cells. In this paper, we propose a discriminative dictionary learning paradigm, termed as Cell Words, for modelling the visual appearance of cells which includes colour, shape, texture and context in a unified manner. The proposed framework is capable of distinguishing mitotic cells from non-mitotic cells (apoptotic, necrotic, epithelial) in breast histology images with high accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
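The "Cell Words" framework above classifies cells via a learned, class-discriminative dictionary of appearance features. As a rough illustration of the general idea (not the authors' actual algorithm), the sketch below learns a small codebook per class with plain k-means and assigns a new feature vector to the class whose codebook quantizes it with the smallest error; all function names and the synthetic 2-D "appearance features" are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Plain k-means; returns k codebook atoms for the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distance of every sample to every atom, then nearest-atom labels
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def fit_class_dictionaries(X, y, atoms_per_class=2):
    """One small codebook ('dictionary') per cell class."""
    return {c: kmeans(X[y == c], atoms_per_class) for c in np.unique(y)}

def classify(x, dictionaries):
    """Assign x to the class whose dictionary quantizes it with least error."""
    err = {c: np.linalg.norm(x - atoms, axis=1).min()
           for c, atoms in dictionaries.items()}
    return min(err, key=err.get)
```

A discriminative dictionary learner would additionally push the per-class codebooks apart during training; the quantization-error decision rule, however, is the same.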
Colé, Pascale; Pynte, Joël; Andriamamonjy, Pascale
Lexical decision times and eye movements were recorded to determine whether grammatical gender can influence the visual recognition of isolated French nouns. This issue was investigated by assessing the use of two types of regularities between a noun's form and its gender--namely ending-to-gender regularities (e.g., the final letter sequence -at appears only in masculine nouns and, thus, is predictive of masculine gender) and gender-to-ending regularities (e.g., feminine gender would predict the final letter e, whereas masculine gender would not). Previous studies have shown that noun endings are used by readers when they have to identify gender. However, the influence of ending-to-gender predictiveness has never been investigated in a lexical decision task, and the effect of gender-to-ending regularities has never been evaluated at all. The results suggest that gender information can influence both the activation stage (Experiments 1 and 3) and the selection stage (Experiments 2 and 3) of the word recognition process.
This study examines the strategies (inferencing and ignoring) and knowledge sources (semantics, morphology, paralinguistics, etc.) that second language learners of English use to process unfamiliar words in listening comprehension, and whether the use of strategies or knowledge sources relates to successful text comprehension or word comprehension. Data were collected using the procedures of immediate retrospection without recall support and of stimulated recall. Twenty participants with Chinese as their first language participated in the procedures. Both qualitative and quantitative analyses were made. The results indicate that inferencing is the primary strategy that learners use to process unfamiliar words in listening and that it relates to successful text comprehension. Among the different knowledge sources that learners use, the most frequently used are semantic knowledge of words in the local co-text combined with background knowledge, and semantic knowledge of the overall co-text. The finding that the use of most knowledge sources does not relate to the comprehension of the word suggests that no particular knowledge source is universally effective or ineffective and that what is crucial is to use the various knowledge sources flexibly.
This work presents some of Bakhtin and the Circle's contributions to the definition and reading of verbal-visual language, situating these contributions within the concept of the word. For this purpose, the word manioc was chosen, recovered in three moments: the transition from oral to written in Couto de Magalhães' text, dated 1876; the text's French version, published in France in 1923; and a contemporary recipe book, first released in 2005 with a second edition in 2006.
Technical report by Ken Nakayama covering progress for the period October 1, 1991 through September 30, 1992. Work proceeded in a number of distinct areas, including: a demonstration that unpaired points are not subject to obligatory Panum matching in unpaired zones (in collaboration with Dr. Preeti Verghese); and visual search, showing different trade-offs for discriminability, number, and time. The authors plan to continue work in all of these areas.
Pavlova A. A.
Background. Previous studies have shown that the brain response to a written word depends on the task: whether the word is a target in a version of a lexical decision task or should be read silently. Although this effect has been interpreted as evidence for an interaction between word recognition processes and task demands, it may also be caused by greater attention allocation to the target word. Objective. We aimed to examine the task effect on the brain response evoked by non-target written words. Design. Using MEG and magnetic source imaging, we compared the spatial-temporal pattern of the brain response elicited by a noun cue when it was read silently either without an additional task (SR) or with a requirement to produce an associated verb (VG). Results. The task demands penetrated into early (200-300 ms) and late (500-800 ms) stages of word processing by enhancing the brain response under the VG versus the SR condition. The cortical sources of the early response were localized to bilateral inferior occipitotemporal and anterior temporal cortex, suggesting that the more demanding VG task required elaborated lexical-semantic analysis. The late effect was observed in the associative auditory areas in the middle and superior temporal gyri and in the motor representation of articulators. Our results suggest that a remote goal plays a pivotal role in the enhanced recruitment of cortical structures underlying the orthographic, semantic, and sensorimotor dimensions of written word perception from the early processing stages. Surprisingly, we found that to fulfil a more challenging goal the brain progressively engaged resources of the right hemisphere throughout all stages of silent reading. Conclusion. Our study demonstrates that deeper processing of linguistic input amplifies activation of brain areas involved in the integration of speech perception and production. This is consistent with theories that emphasize the role of sensorimotor integration in speech understanding.
Kristensen, Line Burholt; Engberg-Pedersen, Elisabeth; Wallentin, Mikkel
The function of the left inferior frontal gyrus (L-IFG) is highly disputed. A number of language processing studies have linked the region to the processing of syntactical structure. Still, there is little agreement when it comes to defining why linguistic structures differ in their effects on the L-IFG. In a number of languages, the processing of object-initial sentences affects the L-IFG more than the processing of subject-initial ones, but frequency and distribution differences may act as confounding variables. Syntactically complex structures (like the object-initial construction in Danish) are often less frequent and only viable in certain contexts. With this confound in mind, the L-IFG activation may be sensitive to other variables than a syntax manipulation on its own. The present fMRI study investigates the effect of a pragmatically appropriate context on the processing of subject-initial and object-initial clauses with the IFG as our ROI. We find that Danish object-initial clauses yield a higher BOLD response in L-IFG, but we also find an interaction between appropriateness of context and word order. This interaction overlaps with traditional syntax areas in the IFG. For object-initial clauses, the effect of an appropriate context is bigger than for subject-initial clauses. This result is supported by an acceptability study that shows that, given appropriate contexts, object-initial clauses are considered more appropriate than subject-initial clauses. The increased L-IFG activation for processing object-initial clauses without a supportive context may be interpreted as reflecting either reinterpretation or the recipients' failure to correctly predict word order from contextual cues.
Hadar, Britt; Skrzypek, Joshua E; Wingfield, Arthur; Ben-David, Boaz M
In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the "visual world" eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., "point at the candle"). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions.
Calvo, Manuel G.; Lang, Peter J.
The authors investigated whether emotional pictorial stimuli are especially likely to be processed in parafoveal vision. Pairs of emotional and neutral visual scenes were presented parafoveally (2.1° or 2.5° of visual angle from a central fixation point) for 150-3,000 ms, followed by an immediate recognition test (500-ms delay).…
Boumaraf, Assia; Macoir, Joël
Deep dyslexia is a written language disorder characterized by poor reading of non-words, and advantage for concrete over abstract words with production of semantic, visual and morphological errors. In this single case study of an Arabic patient with input deep dyslexia, we investigated the impact of graphic features of Arabic on manifestations of…
Ziegler, Johannes C; Ferrand, Ludovic; Montant, Marie
In this study, we investigated orthographic influences on spoken word recognition. The degree of spelling inconsistency was manipulated while rime phonology was held constant. Inconsistent words with subdominant spellings were processed more slowly than inconsistent words with dominant spellings. This graded consistency effect was obtained in three experiments. However, the effect was strongest in lexical decision, intermediate in rime detection, and weakest in auditory naming. We conclude that (1) orthographic consistency effects are not artifacts of phonological, phonetic, or phonotactic properties of the stimulus material; (2) orthographic effects can be found even when the error rate is extremely low, which rules out the possibility that they result from strategies used to reduce task difficulty; and (3) orthographic effects are not restricted to lexical decision. However, they are stronger in lexical decision than in other tasks. Overall, the study shows that learning about orthography alters the way we process spoken language.
Wagner, Katie; Dobkins, Karen; Barner, David
Most current accounts of color word acquisition propose that the delay between children's first production of color words and adult-like understanding is due to problems abstracting color as a domain of meaning. Here we present evidence against this hypothesis, and show that, from the time children produce color words in a labeling task they use…
Rousseaux, M; Debrock, D; Cabaret, M; Steinling, M
A 15 year old ambidextrous patient presented with left temporoparietal lesions after head trauma. Seizures associated with visual hallucinations of written words arose six months later. Electroencephalography showed spike and wave complexes with phase opposition over the left parietal area. On MRI a post-traumatic porencephalic lesion was seen in area 7 and the superior part of area 39 of Brodmann; on T2 sequences, it was surrounded by a hyperintensity predominating in the inferior part of the parietal lobe and extending into the posteroexternal temporal cortex. This first description of hallucinations of written words raises the possibility of the presence, in the temporoparietal cortex, of specific representations ("lexicon") of corresponding information.
Park, Deokgun; Kim, Seungyeon; Lee, Jurim; Choo, Jaegul; Diakopoulos, Nicholas; Elmqvist, Niklas
Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.
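The core ConceptVector operations described above (expanding a concept from seed terms and scoring against a bipolar concept) can be sketched in a few lines, assuming word vectors are available; the toy embedding matrix, vocabulary, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_concept(emb, vocab, seeds, top_k=2):
    """Rank the vocabulary by similarity to the centroid of the seed vectors."""
    centroid = emb[[vocab.index(w) for w in seeds]].mean(axis=0)
    sims = [(cosine(emb[i], centroid), w) for i, w in enumerate(vocab)
            if w not in seeds]
    return [w for _, w in sorted(sims, reverse=True)[:top_k]]

def bipolar_score(emb, vocab, word, positive, negative):
    """Bipolar concept: similarity to the positive pole minus the negative pole."""
    v = emb[vocab.index(word)]
    pos = emb[[vocab.index(w) for w in positive]].mean(axis=0)
    neg = emb[[vocab.index(w) for w in negative]].mean(axis=0)
    return cosine(v, pos) - cosine(v, neg)
```

The bipolar model and the list of irrelevant words mentioned in the abstract are ways of taming polysemy: a word that scores near zero, or that the user has excluded, is kept out of the concept lexicon.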
Simola, Jaana; Holmqvist, Kenneth; Lindgren, Magnus
Readers acquire information outside the current eye fixation. Previous research indicates that having only the fixated word available slows reading, but when the next word is visible, reading is almost as fast as when the whole line is seen. Parafoveal-on-foveal effects are interpreted to reflect that the characteristics of a parafoveal word can influence fixation on a current word. Prior studies also show that words presented to the right visual field (RVF) are processed faster and more accurately than words in the left visual field (LVF). This asymmetry results either from an attentional bias, reading direction, or the cerebral asymmetry of language processing. We used eye-fixation-related potentials (EFRP), a technique that combines eye-tracking and electroencephalography, to investigate visual field differences in parafoveal-on-foveal effects. After a central fixation, a prime word appeared in the middle of the screen together with a parafoveal target that was presented either to the LVF or to the RVF. Both hemifield presentations included three semantic conditions: the words were either semantically associated, non-associated, or the target was a non-word. The participants began reading from the prime and then made a saccade towards the target, subsequently they judged the semantic association. Between 200 and 280ms from the fixation onset, an occipital P2 EFRP-component differentiated between parafoveal word and non-word stimuli when the parafoveal word appeared in the RVF. The results suggest that the extraction of parafoveal information is affected by attention, which is oriented as a function of reading direction.
Eyes have been considered support for the divine design hypothesis over evolution because, surely, eyes cannot function with anything less than all the components that comprise a vertebrate camera-type eye. Yet, devoted Darwinists have estimated that complex visual systems can evolve from a single … to analyse the received information, illustrated by the fact that one third of the human brain is devoted to visual information processing. The cost of maintaining such a neural network deters most organisms from investing in the camera-type option; if possible, they settle for a model that will more precisely … 1000 neurons, which make these stunning animals the perfect model organism to explore basic visual information processing.
Automatic Prompt System in the Process of Mapping plWordNet on Princeton WordNet. The paper offers a critical evaluation of the power and usefulness of an automatic prompt system, based on the extended Relaxation Labelling algorithm, in the process of (manual) mapping of plWordNet on Princeton WordNet. To this end, the results of manual mapping (that is, inter-lingual relations between plWN and PWN synsets) are juxtaposed with the automatic prompts that were generated for the source-language synsets to be mapped. We check the number and type of inter-lingual relations introduced on the basis of automatic prompts and the distance of the respective prompt synsets from the actual target-language synsets.
Fery, Patrick; Morais, Jose
We report a new case of visual associative agnosia. Our patient (DJ) was impaired in several tasks assessing visual processing of real objects, colour pictures, and line drawings. The deficit was present both with naming and gesturing responses. Object processing in other modalities (verbal, auditory nonverbal, and tactile) was intact. Semantic processing was impaired in the visual but not in the verbal modality. Picture-word matching was better than single picture identification. DJ's visual perceptual processing was intact in several tasks such as visual attributes discrimination, shape discrimination, illusory contours perception, segmentation, embedded figures processing and matching objects under different viewpoints. Most importantly, we show that there was no impairment of stored structural descriptions and that the patient was able to build new visual representations. These results are considered in the context of Farah's (1990, 1991) proposals about visual associative agnosia.
Reading is a highly complex, flexible and sophisticated cognitive activity, and word recognition constitutes only a small and limited part of the whole process. It seems, however, that for various reasons word recognition is worth studying separately from other components. Considering that writing systems are secondary codes representing the language, word recognition mechanisms may appear as an interface between printed material and general language capabilities, and thus specific difficulties in reading and spelling acquisition should be located at the level of isolated word identification (see e.g. Crowder, 1982, for discussion). Moreover, it appears that a prominent characteristic of poor readers is their lack of efficiency in the processing of isolated words (Mitchell, 1982; Stanovich, 1982). And finally, word recognition seems to be a more automatic and less controlled component of the whole reading process.
With the popular use of geotagged images, more and more research effort has been placed on geographical scene classification, in which valid spatial feature selection can significantly boost the final performance. Bag of visual words (BoVW) can do well at selecting features for geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor, so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than BoVW as a tool for geographical scene classification on three datasets containing a variety of scene categories.
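The BoVW encoding stage that this entry builds on is simple to state: each local descriptor of an image (here CNN-derived, in classic BoVW often SIFT) is quantized to its nearest "visual word" in a learned vocabulary, and the image is represented by the normalized word histogram. A minimal sketch, with a hand-set vocabulary standing in for one learned by clustering:

```python
import numpy as np

def bovw_encode(descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word and
    return the L1-normalized word histogram for the image."""
    # pairwise distances: (n_descriptors, n_words)
    dists = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

The paper's contribution sits one step earlier in this pipeline: using a CNN to produce descriptors for which the quantization loses less class-relevant information.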
Hao, Ming C.; Keim, Daniel A.; Dayal, Umeshwar; Schneidewind, Jörn
Business operations involve many factors and relationships and are modeled as complex business process workflows. The execution of these business processes generates vast volumes of complex data. The operational data are instances of the process flow, taking different paths through the process. The goal is to use the complex information to analyze and improve operations and to optimize the process flow. In this paper, we introduce a new visualization technique, called VisImpact that turns raw...
Recent work on the acquisition of number words has emphasized the importance of integrating linguistic and developmental perspectives [Musolino, J. (2004). The semantics and acquisition of number words: Integrating linguistic and developmental perspectives. Cognition, 93, 1-41; Papafragou, A., & Musolino, J. (2003). Scalar implicatures: Experiments at the semantics-pragmatics interface. Cognition, 86, 253-282; Hurewitz, F., Papafragou, A., Gleitman, L., & Gelman, R. (2006). Asymmetries in the acquisition of numbers and quantifiers. Language Learning and Development, 2, 76-97; Huang, Y. T., Snedeker, J., & Spelke, L. (submitted for publication). What exactly do numbers mean?]. Specifically, these studies have shown that data from experimental investigations of child language can be used to illuminate core theoretical issues in the semantic and pragmatic analysis of number terms. In this article, I extend this approach to the logico-syntactic properties of number words, focusing on the way numerals interact with each other (e.g., Three boys are holding two balloons) as well as with other quantified expressions (e.g., Three boys are holding each balloon). On the basis of their intuitions, linguists have claimed that such sentences give rise to at least four different interpretations, reflecting the complexity of the linguistic structure and syntactic operations involved. Using psycholinguistic experimentation with preschoolers (n=32) and adult speakers of English (n=32), I show that (a) for adults, the intuitions of linguists can be verified experimentally, (b) by the age of 5, children have knowledge of the core aspects of the logical syntax of number words, (c) in spite of this knowledge, children nevertheless differ from adults in systematic ways, and (d) the differences observed between children and adults can be accounted for on the basis of an independently motivated, linguistically-based processing model [Geurts, B. (2003). Quantifying kids. Language
De Marco, Doriana; De Stefani, Elisa; Gentilucci, Maurizio
The present study aimed at determining whether elaboration of communicative signals (symbolic gestures and words) is always accompanied by their integration with each other and whether, if present, this integration can be considered support for the existence of the same control mechanism. Experiment 1 aimed at determining whether and how gesture is integrated with word. Participants were administered a semantic priming paradigm with a lexical decision task and pronounced a target word, which was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with word meaning. Duration of prime presentation (100, 250, 400 ms) randomly varied. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of voice spectra, and mean velocity in lip kinematics, increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture. In other words, parameters of voice and movement were magnified by congruence, but this occurred only when prime duration was 250 ms. Time to response to a meaningful gesture was shorter in the condition of congruence compared to incongruence. Experiment 2 aimed at determining whether the mechanism of integration of a prime word with a target word is similar to that of a prime gesture with a target word. Formant 1 of the target word increased when the word prime was meaningful and congruent, as compared to a meaningless congruent prime. The increase was, however, present for all prime word durations. Experiment 3 aimed at determining whether symbolic prime gesture comprehension makes use of motor simulation. Transcranial Magnetic Stimulation was delivered to left primary motor cortex 100, 250, 500 ms after prime gesture presentation. The Motor Evoked Potential of First Dorsal Interosseus increased when stimulation occurred 100 ms post-stimulus. Thus, gesture was understood within 100 ms and integrated with the target word within 250 ms
Jia, Xiaoyu; Li, Ping; Li, Xinyu; Zhang, Yuchi; Cao, Wei; Cao, Liren; Li, Weijian
Previous research has shown that word frequency affects judgments of learning (JOLs). Specifically, people give higher JOLs for high-frequency (HF) words than for low-frequency (LF) words. However, the exact mechanism underlying this effect is largely unknown. The present study replicated and extended previous work by exploring the contributions of processing fluency and beliefs to the word frequency effect. In Experiment 1, participants studied HF and LF words and made immediate JOLs. The findings showed that participants gave higher JOLs for HF words than for LF ones, reflecting the word frequency effect. In Experiment 2a (measuring the encoding fluency by using self-paced study time) and Experiment 2b (disrupting perceptual fluency by presenting words in an easy or difficult font style), we evaluated the contribution of processing fluency. The findings of Experiment 2a revealed no significant difference in self-paced study time between HF and LF words. The findings of Experiment 2b showed that the size of word frequency effect did not decrease or disappear even when presenting words in a difficult font style. In Experiment 3a (a questionnaire-based study) and Experiment 3b (making pre-study JOLs), we evaluated the role of beliefs in this word frequency effect. The results of Experiment 3a showed that participants gave higher estimates for HF as compared to LF words. That is, they estimated that hypothetical participants would better remember the HF words. The results of Experiment 3b showed that participants gave higher pre-study JOLs for HF than for LF words. These results across experiments suggested that people's beliefs, not processing fluency, contribute substantially to the word frequency effect on JOLs. However, considering the validation of the indexes reflecting the processing fluency in the current study, we cannot entirely rule out the possible contribution of processing fluency. The relative contribution of processing fluency and beliefs to word
Lei, Yi; Dou, Haoran; Liu, Qingming; Zhang, Wenhai; Zhang, Zhonglu; Li, Hong
It has long been debated to what extent emotional words can be processed in the absence of awareness. Behavioral studies have shown that the meaning of emotional words can be accessed even without any awareness. However, functional magnetic resonance imaging studies have revealed that emotional words that are presented unconsciously do not activate the brain regions involved in semantic or emotional processing. To clarify this point, we used continuous flash suppression (CFS) and event-related potential (ERP) techniques to distinguish between semantic and emotional processing. In CFS, Mondrian-style images were flashed successively to one of each participant's eyes, suppressing the stimuli projected to the other eye. Negative, neutral, and scrambled words were presented to 16 healthy participants for 500 ms. Whenever the participants saw the stimuli, in both visible and invisible conditions, they pressed specific keyboard buttons. Behavioral data revealed no difference in reaction time between negative and neutral words in the invisible condition, although negative words were processed faster than neutral words in the visible condition. The ERP results showed that negative words elicited a larger P2 amplitude in the invisible condition than in the visible condition. The P2 component was enhanced for neutral words compared with scrambled words in the visible condition; however, scrambled words elicited larger P2 amplitudes than neutral words in the invisible condition. These results suggest that the emotional processing of words is more sensitive than semantic processing in the conscious condition, and that semantic processing is attenuated in the absence of awareness. Our findings indicate that P2 plays an important role in the unconscious processing of emotional words, which highlights the fact that emotional processing may be automatic and prioritized over semantic processing in the absence of awareness.
Applebury, M L
Rhodopsin is one of those rare macromolecules whose inherent chromophore, 11-cis retinaldehyde, allows one to naturally observe triggered macromolecular changes on the timescale of picoseconds to minutes. Investigations of these molecular processes have been carried out with laser monochromatic light under conditions where the photon flux used for photolysis was carefully measured. The formation of bleaching intermediates has been examined as a function of fluence. Under conditions where the formation of intermediates is unaffected by photon reversal the following observations hold: Upon the absorption of a photon, the initial photochemical event results in production of metastable bathorhodopsin within 6 psec. Artificial rhodopsin regenerated with 9-cis retinal forms a distinct bathorhodopsin which must reflect distortions at the active site differing from those generated with 11-cis retinal. Bathorhodopsin thermally decays through lumirhodopsin and meta I-rhodopsin, to meta II-rhodopsin through a series of coupled equilibria. The final meta I-meta II equilibrium is stable for seconds. The process provides a unique model for utilization of energy to drive (trigger) a biological cascade of events.
Presents some negative aspects of society's dependence on digital transformation of words by referring to works by Walter Ong and Martin Heidegger. Discusses orality, literacy and digital literacy and describes three aspects of the digital transformation of words. Compares/contrasts art with technology and discusses implications for education.…
Yurovsky, Daniel; Yu, Chen; Smith, Linda B.
Cross-situational word learning, like any statistical learning problem, involves tracking the regularities in the environment. However, the information that learners pick up from these regularities is dependent on their learning mechanism. This article investigates the role of one type of mechanism in statistical word learning: competition.…
Gordon, Chelsea L; Spivey, Michael J; Balasubramaniam, Ramesh
A number of studies have suggested that perception of actions is accompanied by motor simulation of those actions. To further explore this proposal, we applied Transcranial magnetic stimulation (TMS) to the left primary motor cortex during the observation of handwritten and typed language stimuli, including words and non-word consonant clusters. We recorded motor-evoked potentials (MEPs) from the right first dorsal interosseous (FDI) muscle to measure cortico-spinal excitability during written text perception. We observed a facilitation in MEPs for handwritten stimuli, regardless of whether the stimuli were words or non-words, suggesting potential motor simulation during observation. We did not observe a similar facilitation for the typed stimuli, suggesting that motor simulation was not occurring during observation of typed text. By demonstrating potential simulation of written language text during observation, these findings add to a growing literature suggesting that the motor system plays a strong role in the perception of written language. Copyright © 2017 Elsevier B.V. All rights reserved.
Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F.
Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these…
Kunchulia, Marina; Pilz, Karin S; Herzog, Michael H
Alcohol affects vision. However, the influence of alcohol on visual processing is largely unknown. Here, we investigated the effects of alcohol on visual spatiotemporal processing. We employed a visual paradigm, the shine-through backward masking paradigm, in which a vernier is either presented alone or followed by a variety of masks. We investigated performance for women at blood alcohol levels of 0 mg/kg, 400 mg/kg and 600 mg/kg, and for men at 0 mg/kg, 400 mg/kg and 800 mg/kg. When the vernier was presented alone, vernier offset discrimination was not affected by alcohol. When the vernier was followed by a mask, stimulus onset asynchronies (SOAs) between target and mask were significantly longer after alcohol intake. However, as a second experiment showed, spatial and temporal processing per se were not impaired by alcohol. In addition, spatial processing was not affected by moderate alcohol consumption. Hence, moderate consumption of alcohol does not affect visual processing per se. We propose that the longer SOAs after alcohol intake are related to changes in mechanisms of target stabilization rather than to changes in spatial and temporal sensitivity, as has been previously suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.
Grounded cognition theories suggest that conceptual representations essentially depend on modality-specific sensory and motor systems. Feature-specific brain activation across different feature types such as action or audition has been intensively investigated in nouns, while studies of feature-specific conceptual category differences in verbs have mainly focused on body-part-specific effects. The present work aimed at assessing whether feature-specific event-related potential (ERP) differences between action and sound concepts, as previously observed in nouns, can also be found within the word class of verbs. In Experiment 1, participants were visually presented with carefully matched sound and action verbs within a lexical decision task, which provides implicit access to word meaning and minimizes strategic access to semantic word features. Experiment 2 tested whether pre-activating the verb concept in a context phase, in which the verb is presented with a related context noun, modulates subsequent feature-specific action vs. sound verb processing within the lexical decision task. In Experiment 1, ERP analyses revealed a differential ERP polarity pattern for action and sound verbs at parietal and central electrodes similar to previous results in nouns. Pre-activation of the meaning of verbs in the preceding context phase in Experiment 2 resulted in a polarity reversal of feature-specific ERP effects in the lexical decision task compared with Experiment 1. This parallels analogous earlier findings for primed action- and sound-related nouns. In line with grounded cognition theories, our ERP study provides evidence for differential processing of action and sound verbs similar to earlier observations for concrete nouns. Although the localizational value of ERPs must be viewed with caution, our results indicate that the meaning of verbs is linked to different neural circuits depending on conceptual feature relevance.
Wang, Jim Jing-Yan
In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
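The reconstruction-weight idea behind this kind of QP assignment can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it solves only the sum-to-one equality-constrained core of the reconstruction problem in closed form (in the style of locality-constrained coding), omits the non-negativity constraints a full QP solver would enforce, and all function names are illustrative.

```python
import numpy as np

def qp_assignment(descriptor, vocabulary, k=5, reg=1e-4):
    """Soft-assign one local descriptor to its k nearest visual words
    by solving for reconstruction weights that sum to one."""
    # Locate the k nearest visual words (Euclidean distance).
    dists = np.linalg.norm(vocabulary - descriptor, axis=1)
    nearest = np.argsort(dists)[:k]
    B = vocabulary[nearest]                    # k x d neighbor matrix
    # Closed-form solution of  min ||x - w.T B||^2  s.t.  sum(w) = 1.
    C = (B - descriptor) @ (B - descriptor).T  # k x k data "covariance"
    C += reg * np.trace(C) * np.eye(k)         # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                               # enforce the sum-to-one constraint
    # Scatter the weights into a vocabulary-sized contribution vector.
    contrib = np.zeros(len(vocabulary))
    contrib[nearest] = w
    return contrib
```

Accumulating these contribution vectors over all local descriptors of an image would then yield its bag-of-features representation for retrieval.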
Amrani, Moussa; Chaib, Souleyman; Omara, Ibrahim; Jiang, Feng
Feature extraction plays a key role in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). Choosing appropriate features to train a classifier is crucial, and a prerequisite for good performance. Inspired by the great success of Bag-of-Visual-Words (BoVW), we address the problem of feature extraction by proposing a novel feature extraction method for SAR target classification. First, Gabor-based features are adopted to extract features from the training SAR images. Second, a discriminative codebook is generated using the K-means clustering algorithm. Third, after feature encoding by computing the closest Euclidean distance, the targets are represented by a new robust bag of features. Finally, for target classification, a support vector machine (SVM) is used as a baseline classifier. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public release dataset are conducted, and the classification accuracy and time complexity results demonstrate that the proposed method outperforms the state-of-the-art methods.
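The pipeline this abstract describes (Gabor features, K-means codebook, closest-word encoding, histogram of visual words) can be sketched roughly as below. This is a schematic reconstruction, not the authors' code: the Gabor bank (4 orientations, fixed frequency), the 8x8 patch grid, and the tiny NumPy K-means are illustrative choices, and the final SVM stage is left to a library such as scikit-learn.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, freq=0.25, sigma=2.0, size=11):
    """Real-valued Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_descriptors(image, patch=8, n_orient=4):
    """Mean Gabor-response magnitude per patch -> one descriptor per patch."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    stack = np.stack([np.abs(fftconvolve(image, gabor_kernel(t), mode='same'))
                      for t in thetas], axis=-1)
    descs = [stack[i:i + patch, j:j + patch].mean(axis=(0, 1))
             for i in range(0, image.shape[0] - patch + 1, patch)
             for j in range(0, image.shape[1] - patch + 1, patch)]
    return np.array(descs)

def kmeans_codebook(descs, k=8, iters=20, seed=0):
    """Plain Lloyd's K-means; returns the k codebook centers."""
    rng = np.random.default_rng(seed)
    centers = descs[rng.choice(len(descs), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((descs[:, None] - centers[None])**2).sum(-1).argmin(1)
        for c in range(k):
            members = descs[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers

def bovw_histogram(descs, codebook):
    """Hard-assign each descriptor to its closest codeword, then normalize."""
    words = ((descs[:, None] - codebook[None])**2).sum(-1).argmin(1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

A linear or RBF SVM trained on these normalized histograms would then play the role of the baseline classifier described in the abstract.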
Presents information on the visualization and processing of tensor fields. This book serves as an overview for the inquiring scientist, as a basic foundation for developers and practitioners, and as a textbook for specialized classes and seminars for graduate and doctoral students.
Keetels, Mirjam; Vroomen, Jean
The authors examined the effects of a task-irrelevant sound on visual processing. Participants were presented with revolving clocks at or around central fixation and reported the hand position of a target clock at the time an exogenous cue (1 clock turning red) or an endogenous cue (a line pointing toward 1 of the clocks) was presented. A…
Simon, Grégory; Bernard, Christian; Largy, Pierre; Lalonde, Robert; Rebai, Mohamed
In order to investigate the neuroanatomical chronometry of word processing, two experiments using event-related potentials (ERPs) were performed. The first was designed to test the effects of orthographic, phonologic, and lexical properties of linguistic items on the pre-semantic components of ERPs during a passive reading task, with massive repetition used to reduce familiarity differences between words and nonwords. In a second study, the level of familiarity was investigated by varying stimulus repetition and frequency in a lexical decision task. Overall, the results suggest that a functional discrimination between orthographic and nonorthographic stimuli began as early as 170 ms (N170 component), whereas the subsequent components (N230 and N320) were sensitive not only to the orthographic nature of the stimuli but also to their lexical/phonologic properties. The N320, associated with phonological processing (Bentin et al., 1999), was modulated by word frequency, and massive repetition caused its disappearance. This suggests that this component may reflect the nonobligatory phonologic stage of grapheme-phoneme conversion postulated by the DRC model (Coltheart et al., 2001) or a semantic, phonologically mediated pathway (Harm & Seidenberg, in press).
Beyersmann, Elisabeth; Castles, Anne; Coltheart, Max
The present experiments were designed to explore the theory of early morpho-orthographic segmentation (Rastle, Davis, & New, Psychonomic Bulletin & Review 11,1090-1098, 2004), which postulates that written words with a true morphologically complex structure (cleaner) and those with a morphological pseudostructure (corner) are both decomposed into affix and stem morphemes. We used masked complex transposed-letter (TL) nonword primes in a lexical decision task. Experiment 1 replicated the well-known masked TL-priming effect using monomorphemic nonword primes (e.g., wran-WARN). Experiment 2 used the same nonword TL stems as in Experiment 1, but combined them with real suffixes (e.g., ish as in wranish-WARN). Priming was compared with that from nonsuffixed primes in which the real suffixes were replaced with nonmorphemic endings (e.g., el as in wranel-WARN). Significant priming was found in the suffixed but not in the nonsuffixed condition, suggesting that affix-stripping occurs at prelexical stages in visual word recognition and operates over early letter-position encoding mechanisms.
Heather Raye Dial
Replicating previous studies, performance on the two word recognition tasks without closely matched distractors (WAB and PWM) was at ceiling for some subjects with impairments on consonant discrimination (see Figures 1a/1b). However, as shown in Figures 1c/1d, for word processing tasks matched in phonological discriminability to the consonant discrimination task, scores on consonant discrimination and word processing were highly correlated, and no individual demonstrated substantially better performance on word than phoneme perception. One patient demonstrated worse performance on lexical decision (d′ = .21) than phoneme perception (d′ = 1.72), which can be attributed to impaired lexical or semantic processing. These data argue against the hypothesis that phoneme and word perception rely on different perceptual processes/routes for processing, and instead indicate that word perception depends on perception of sublexical units.
Jung, JeYoung; Kim, Sunmi; Cho, Hyesuk; Nam, Kichun
This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two types of structural imaging (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that a semantic network including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC) and pMTG was correlated with semantic efficiency; in particular, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with word processing efficiency. Our results demonstrated that the two features of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that the way each region serves efficiency differs according to the feature of word processing. Our findings suggest that word processing efficiency is achieved through the structural and functional collaboration of multiple brain regions involved in language and general cognitive function.
Georgiou, George; Liu, Cuina; Xu, Shiyang
Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age=58.99months, SD=3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.
Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo
While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90 ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230 ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.
The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading, when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoetic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-model integration of word/letter-symbol strings within visual narratives elicit ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
A number of recent studies consistently show an area, known as the visual word form area (VWFA), in the left fusiform gyrus that is selectively responsive to visual words in alphabetic scripts as well as in logographic scripts, such as Chinese characters. However, given the large difference between Chinese characters and alphabetic scripts in terms of their orthographic rules, it is not clear at a fine spatial scale whether Chinese characters engage the same VWFA in the occipito-temporal cortex as alphabetic scripts. We specifically compared Chinese with Korean script, with Korean script serving as a good example of an alphabetic writing system, but matched to Chinese in overall square shape. Sixteen proficient early Chinese-Korean bilinguals took part in the fMRI experiment. Four types of stimuli (Chinese characters, Korean characters, line drawings and unfamiliar Chinese faces) were presented in a block-design paradigm. By contrasting characters (Chinese or Korean) to faces, presumed VWFAs could be identified for both Chinese and Korean characters in the left occipito-temporal sulcus in each subject. The locations of the peak response points in these two VWFAs were essentially the same. Further analysis revealed a substantial overlap between the VWFA identified for Chinese and that for Korean. At the group level, there was no significant difference in the amplitude of response to Chinese and Korean characters, and the spatial patterns of response to Chinese and Korean were similar. In addition to confirming that there is an area in the left occipito-temporal cortex that selectively responds to scripts in both Korean and Chinese in early Chinese-Korean bilinguals, our results show that these two scripts engage essentially the same VWFA, even at the level of fine spatial patterns of activation across voxels. These results suggest that similar populations of neurons are engaged in processing the different scripts within the same VWFA in early bilinguals.
, the aesthetic component seems to have been of less concern to the church fathers. Only at the beginning of the sixth century did the topic of aesthetic value begin to figure in Christian writings. Pseudo-Dionysius the Areopagite made some important observations on aesthetics in his description of the gnoseological function of symbolic images. He felt that visual symbols were the most appropriate instruments for learning about God Himself (who is beyond any definition or description that words can provide) because they could at least evoke some idea of His divine nature. However, what was new in the evaluation of symbols in their gnoseological function was the idea that the beauty of these images stimulates the mind to strive to attain knowledge of the divine order that rules the universe. Visual communication and the visual arts thus cease to be regarded as mere aids to the verbal message—a sort of picture-book for the ignorant “who read in them what they cannot read in books”—and begin to be considered autonomous media that by far transcend their didactic religious function.
Christianson, Kiel; Zhou, Peiyun; Palmer, Cassie; Raizen, Adina
Previous studies suggest that taboo words are special in regards to language processing. Findings from the studies have led to the formation of two theories, global resource theory and binding theory, of taboo word processing. The current study investigates how readers process taboo words embedded in sentences during silent reading. In two experiments, measures collected include eye movement data, accuracy and reaction time measures for recalling probe words within the sentences, and individual differences in likelihood of being offended by taboo words. Although certain aspects of the results support both theories, as the likelihood of a person being offended by a taboo word influenced some measures, neither theory sufficiently predicts or describes the effects observed. The results are interpreted as evidence that processing effects ascribed to taboo words are largely, but not completely, attributable to the context in which they are used and the individual attitudes of the people who hear/read them. The results also demonstrate the importance of investigating taboo words in naturalistic language processing paradigms. A revised theory of taboo word processing is proposed that incorporates both global resource theory and binding theory along with the sociolinguistic factors and individual differences that largely drive the effects observed here. Copyright © 2017 Elsevier B.V. All rights reserved.
Children's remarkable ability to map linguistic labels to objects in the world is referred to as fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel the forgetting functions of domain-general memory processes. Memory processes are critical to children's word learning and the role of one such process, forgetting, is discussed in detail—forgetting supports both word mapping and the generalization of words and categories.
Klepousniotou, E.; Baum, S.R.
Using an auditory semantic priming paradigm, the present study investigated the abilities of left-hemisphere-damaged (LHD) non-fluent aphasic, right-hemisphere-damaged (RHD) and normal control individuals to access, out of context, the multiple meanings of three types of ambiguous words, namely homonyms (e.g., ''punch''), metonymies (e.g.,…
The vast majority of visual anthropologists of the 20th century were more focused on the general phenomenology of visual anthropology, i.e. the content aspect of their works and their impact on scientific knowledge, leaving aside the style of directing and the practical principles and processes of creating an anthropological film. So far, judging by the available literature, there are no strict guidelines for directorial procedures, nor a precise definition of the methodical processes in the production of an anthropological film. Consequently, the goal of this study is to determine the structure and forms of these methodical processes, and to define the advantages and disadvantages of each of them. By using adequate guidelines, the researcher, i.e. the author of the anthropological film, can optimize the production and post-production processes as early as the preparation (preproduction) period of working on the film, through the choice of approach to the production (proactive/reactive/subjective/objective...) and by defining the style of directing. In other words, this ultimately means a more relevant scientific research result achieved with less time and fewer resources.
Roblyer, M. D.
Introduced to aid writing, word processing can cause unexpected problems for those who use it. Describes four studies in which raters gave word-processed essays consistently lower scores than handwritten essays. Reasons for the discrepancies were higher expectations for typed essays, ease of spotting text errors in typed text, and more difficulty…
Pasquarella, Adrian; Deacon, Helene; Chen, Becky X.; Commissaire, Eva; Au-Yeung, Karen
This study examined the within-language and cross-language relationships between orthographic processing and word reading in French and English across Grades 1 and 2. Seventy-three children in French Immersion completed measures of orthographic processing and word reading in French and English in Grade 1 and Grade 2, as well as a series of control…
Chen, Jenn-Yeu; Chuang, Chun-Yu
A 2003 study by Green and Bavelier showed that action video-game playing modified the visual selective attention of habitual players. The present study therefore tested whether processing of Chinese characters becomes more phonologically or orthographically oriented depending on whether participants were experienced in typing with the phonological (zhuyin) or the orthographic (changjie) word-entry method. In Exp. 1, 38 changjie and 40 zhuyin users typed a short text on a computer using the word-entry method they were experienced with. Every keystroke was recorded, and typing errors were categorized. In Exp. 2, 25 changjie and 25 zhuyin users had to circle all characters containing a predesignated radical while they read a short passage. In Exp. 3, 25 changjie and 20 zhuyin users heard pairs of syllables and had to decide whether the two syllables in a pair shared the same onset consonant in one block of trials or the same rhyme in another block of trials. Analysis showed that participants with extensive experience using phonological typing displayed more phonologically related typing errors and better sensitivity to the onset and rhyme of a syllable, but poorer sensitivity to the radical of a character. Participants with extensive experience using orthographic typing displayed the opposite pattern. Although the general cognitive system might be similar in the two groups of participants, the specific configuration of the system can vary to meet the demands of a particular design of the artifactual environment.
Kribbs, Elizabeth E.; Rogowsky, Beth A.
Mathematics word-problems continue to be an insurmountable challenge for many middle school students. Educators have used pictorial and schematic illustrations within the classroom to help students visualize these problems. However, the data shows that pictorial representations can be more harmful than helpful in that they only display objects or…
McBride, Dawn M; Anne Dosher, Barbara
Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.
Buchwald, Adam; Falconer, Carolyn
Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.
Papadopoulos, Judith; Domahs, Frank; Kauschke, Christina
Although it has been established that human beings process concrete and abstract words differently, it is still a matter of debate what factors contribute to this difference. Since concrete concepts are closely tied to sensory perception, perceptual experience seems to play an important role in their processing. The present study investigated the processing of nouns during an auditory lexical decision task. Participants came from three populations differing in their visual-perceptual experience: congenitally blind persons, word-color synesthetes, and sighted non-synesthetes. Specifically, three features with potential relevance to concreteness were manipulated: sensory perception, emotionality, and Husserlian lifeworld, a concept related to the inner versus the outer world of the self. In addition to a classical concreteness effect, our results revealed a significant effect of lifeworld: words that are closely linked to the internal states of humans were processed faster than words referring to the outside world. When lifeworld was introduced as predictor, there was no effect of emotionality. Concerning participants' perceptual experience, an interaction between participant group and item characteristics was found: the effects of both concreteness and lifeworld were more pronounced for blind compared to sighted participants. We will discuss the results in the context of embodied semantics, and we will propose an approach to concreteness based on the individual's bodily experience and the relatedness of a given concept to the self.
Shen, Wei; Li, Xingshan
In the current study, we used eye tracking to investigate whether senses of polysemous words and meanings of homonymous words are represented and processed similarly or differently in Chinese reading. Readers read sentences containing target words that were either homonymous or polysemous. The context preceding the target words was manipulated to bias the participants toward reading the ambiguous words according to their dominant, subordinate, or neutral meanings. Similarly, disambiguating regions following the target words were also manipulated to favor either the dominant or subordinate meanings of the ambiguous words. The results showed similar eye movement patterns when Chinese participants read sentences containing homonymous and polysemous words. The study also found that participants took longer to read the target word and the disambiguating text following it when the prior context and disambiguating regions favored divergent meanings rather than the same meaning. These results suggest that homonymy and polysemy are represented similarly in the mental lexicon when a particular meaning (sense) is fully specified by disambiguating information. Furthermore, multiple meanings (senses) are represented as separate entries in the mental lexicon.
Hinojosa, José A; Méndez-Bértolo, Constantino; Pozo, Miguel A
Recent data suggest that word valence modulates subsequent cognitive processing. However, the contribution of word arousal is less understood. In this study, behavioral and electrophysiological measures to neutral nouns and pseudowords that were preceded by either a high-arousal or a low-arousal word were recorded during a lexical decision task. Effects were found at an electrophysiological level. Target words and pseudowords elicited enhanced N100 amplitudes when they were preceded by high- compared to low-arousing words. This effect may reflect perceptual potentiation during the allocation of attentional resources when the new stimulus is processed. Enhanced amplitudes in a late positivity when target words and pseudowords followed high-arousal primes were also observed, which could be related to sustained attention during supplementary analyses at a post-lexical level. Copyright © 2012 Elsevier B.V. All rights reserved.
DeWitt, Iain; Rauschecker, Josef P.
Auditory word-form recognition was originally proposed by Wernicke to occur within left superior temporal gyrus (STG), later further specified to be in posterior STG. To account for clinical observations (specifically paraphasia), Wernicke proposed his sensory speech center was also essential for correcting output from frontal speech-motor regions. Recent work, in contrast, has established a role for anterior STG, part of the auditory ventral stream, in the recognition of species-specific voc...
Heather Raye Dial
Introduction Dissociations between preserved word recognition and impaired phoneme perception have long been noted (e.g., Blumstein, Cooper, Zurif & Caramazza, 1977; Miceli, Gainotti, Caltagirone, & Masullo, 1980). This dissociation is surprising given the assumption that word perception depends on phoneme perception. Consequently, some researchers have claimed that different perceptual processes are involved in phoneme and word perception (Blumstein et al., 1977) and that there are two...
Bailey, April H; Kelly, Spencer D
Judging others' power facilitates successful social interaction. Both gender and body posture have been shown to influence judgments of another's power. However, little is known about how these two cues interact when they conflict or how they influence early processing. The present study investigated this question during very early processing of power-related words using event-related potentials (ERPs). Participants viewed images of women and men in dominant and submissive postures that were quickly followed by dominant or submissive words. Gender and posture both modulated neural responses in the N2 latency range to dominant words, but for submissive words they had little impact. Thus, in the context of dual-processing theories of person perception, information extracted from both behavior (i.e., posture) and from category membership (i.e., gender) are recruited side-by-side to impact word processing.
Spencer, Janine V; O'Brien, Justin M D
People with autism have a number of reported deficits in object recognition and global processing. Is there a low-level spatial integration deficit associated with this? We measured spatial-form-coherence detection thresholds using a Glass stimulus in a field of random dots, and compared performance to a similar motion-coherence task. A coherent visual patch was depicted by dots separated by a rotational transformation in space (form) or space-time (motion). To measure parallel visual integration, stimuli were presented for only 250 ms. We compared detection thresholds for children with autism, children with Asperger syndrome, and a matched control group. Children with autism showed a significant form-coherence deficit and a significant motion-coherence deficit, while the performance of the children with Asperger syndrome did not differ significantly from that of controls on either task.
Fernández, Gerardo; Sapognikoff, Marcelo; Guinjoan, Salvador; Orozco, David; Agamennoni, Osvaldo
The current study analyzed the effect of word properties (i.e., word length, word frequency and word predictability) on the eye movement behavior of patients with schizophrenia (SZ) compared to age-matched controls. 18 SZ patients and 40 age-matched controls participated in the study. Eye movements were recorded with an eye tracker while participants read regular sentences. Eye movement analyses were performed using linear mixed models. The analysis revealed that patients with SZ made fewer single fixations and more second-pass fixations than healthy individuals (Controls). In addition, SZ patients showed longer gaze durations compared to Controls. Interestingly, the effects of current word frequency and current word length were similar in Controls and SZ patients. The higher rate of second-pass fixations and the lower rate of single fixations might reveal impairments in working memory when integrating neighboring words. In contrast, word frequency and length processing might require less complex mechanisms, which were still functioning in SZ patients. To the best of our knowledge, this is the first study measuring how patients with SZ dynamically process well-defined words embedded in regular sentences. The findings suggest that evaluation of the resulting changes in eye movement behavior may supplement current symptom-based diagnosis. Copyright © 2016 Elsevier Inc. All rights reserved.
Apfelbaum, Keith S.; McMurray, Bob
Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed…
Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina
In consistent orthographies, isolated reading disorders (iRD) and isolated spelling disorders (iSD) are nearly as common as combined reading-spelling disorders (cRSD). However, the exact nature of the underlying word processing deficits in isolated versus combined literacy deficits is not yet well understood. We applied a phonological lexical decision task (including words, pseudohomophones, legal and illegal pseudowords) during ERP recording to investigate the neurophysiological correlates of lexical and sublexical word processing in children with iRD, iSD and cRSD compared to typically developing (TD) 9-year-olds. TD children showed enhanced early sensitivity (N170) for word material and for the violation of orthographic rules compared to the other groups. Lexical orthographic effects (higher LPC amplitude for words than for pseudohomophones) were the same in the TD and iRD groups, although processing took longer in children with iRD. In the iSD and cRSD groups, lexical orthographic effects were evident and stable over time only for correctly spelled words. Orthographic representations were intact in iRD children, but word processing took longer compared to TD. Children with spelling disorders had partly missing orthographic representations. Our study is the first to specify the underlying neurophysiology of word processing deficits associated with isolated literacy deficits. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed, strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention to spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, while the subjects' attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.
Velan, Hadas; Frost, Ram
Recent studies suggest that basic effects which are markers of visual word recognition in Indo-European languages cannot be obtained in Hebrew or in Arabic. Although Hebrew has an alphabetic writing system, just like English, French, or Spanish, a series of studies consistently suggested that simple form-orthographic priming, or…
Giezen, Marcel R; Baker, Anne E; Escudero, Paola
The effect of using signed communication on the spoken language development of deaf children with a cochlear implant (CI) is much debated. We report on two studies that investigated relationships between spoken word and sign processing in children with a CI who are exposed to signs in addition to spoken language. Study 1 assessed rapid word and sign learning in 13 children with a CI and found that performance in both language modalities correlated positively. Study 2 tested the effects of using sign-supported speech on spoken word processing in eight children with a CI, showing that simultaneously perceiving signs and spoken words does not negatively impact their spoken word recognition or learning. Together, these two studies suggest that sign exposure does not necessarily have a negative effect on speech processing in some children with a CI.
We propose a new model-based approach linking word learning to the age of acquisition (AoA) of words: a new computational tool for understanding the relationships among word learning processes, psychological attributes, and word AoAs as measures of vocabulary growth. The model describes the distinct statistical relationships between three theoretical factors underpinning word learning and AoA distributions. Simply put, it formulates how different learning processes, characterized by change in learning rate over time and/or by the number of exposures required to acquire a word, likely result in different AoA distributions depending on word type. We tested the model in three respects. The first analysis showed that the proposed model accounts for empirical AoA distributions better than a standard alternative. The second analysis demonstrated that the estimated learning parameters predicted psychological attributes of words, such as frequency and imageability, well. The third analysis illustrated that the developmental trend predicted by our estimated learning parameters was consistent with relevant findings in the developmental literature on word learning in children. We further discuss the theoretical implications of our model-based approach.
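A minimal simulation of the general idea (a word counts as acquired once a required number of effective exposures has accumulated, and exposures arrive at a frequency-dependent rate) might look as follows. The function name and all parameter values are illustrative, not the authors' actual model.

```python
import random

random.seed(1)

def simulate_aoa(exposures_needed, exposures_per_month, p_learn, trials=2000):
    """Return simulated ages (in months) at which a word is acquired.
    Each month delivers exposures_per_month encounters; each encounter
    'counts' with probability p_learn (a crude stand-in for learning rate)."""
    ages = []
    for _ in range(trials):
        count, month = 0, 0
        while count < exposures_needed:
            month += 1
            for _ in range(exposures_per_month):
                if random.random() < p_learn:
                    count += 1
        ages.append(month)
    return ages

# High-frequency word: many monthly encounters -> earlier, tighter AoA
early = simulate_aoa(exposures_needed=20, exposures_per_month=4, p_learn=0.5)
# Low-frequency word: few encounters -> later, more spread-out AoA
late = simulate_aoa(exposures_needed=20, exposures_per_month=1, p_learn=0.5)

mean_early = sum(early) / len(early)
mean_late = sum(late) / len(late)
print(mean_early, mean_late)
```

Varying the exposure threshold and per-month rate yields different AoA distributions by word type, which is the kind of mapping such a model can be fit against.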
Lartseva, Alina; Dijkstra, Ton; Kan, Cornelis C.; Buitelaar, Jan K.
This study investigated processing of emotion words in autism spectrum disorders (ASD) using reaction times and event-related potentials (ERP). Adults with (n = 21) and without (n = 20) ASD performed a lexical decision task on emotion and neutral words while their brain activity was recorded. Both groups showed faster responses to emotion words…
Wolff, Susann; Schlesewsky, Matthias; Hirotani, Masako; Bornkessel-Schlesewsky, Ina
We present two ERP studies on the processing of word order variations in Japanese, a language that is suited to shedding further light on the implications of word order freedom for neurocognitive approaches to sentence comprehension. Experiment 1 used auditory presentation and revealed that initial accusative objects elicit increased processing…
Faust, Miriam; Ben-Artzi, Elisheva; Harel, Itay
Previous research suggests that the left hemisphere (LH) focuses on strongly related word meanings; the right hemisphere (RH) may contribute uniquely to the processing of lexical ambiguity by activating and maintaining a wide range of meanings, including subordinate meanings. The present study used the word-lists false memory paradigm [Roediger,…
Martin-Loeches, Manuel; Fernandez, Anabel; Schacht, Annekathrin; Sommer, Werner; Casado, Pilar; Jimenez-Ortega, Laura; Fondevila, Sabela
Whereas most previous studies on emotion in language have focussed on single words, we investigated the influence of the emotional valence of a word on the syntactic and semantic processes unfolding during sentence comprehension, by means of event-related brain potentials (ERP). Experiment 1 assessed how positive, negative, and neutral adjectives…
Research with native speakers indicates that, during word recognition, regularly inflected words undergo parsing that segments them into stems and affixes. In contrast, studies with learners suggest that this parsing may not take place in L2. This study's research questions are: Do L2 Spanish learners store and process regularly inflected,…
Morphy, Paul; Graham, Steve
Since its advent word processing has become a common writing tool, providing potential advantages over writing by hand. Word processors permit easy revision, produce legible characters quickly, and may provide additional supports (e.g., spellcheckers, speech recognition). Such advantages should remedy common difficulties among weaker…
Lee, Sung Hee; Hwang, Mina
Hyperlexia is a syndrome of reading without meaning in individuals who otherwise have pronounced cognitive and language deficits. The present study investigated the quality of word representation and the effects of deficient semantic processing on word and nonword reading of Korean children with hyperlexia; their performances were compared to…
This article examines a number of synsets in order to identify the word-formation processes used by various linguists in constructing the AWN. Since the English Princeton wordnet was used as the basis for the lexical database in the creation of the African wordnet, various word-formation strategies had to be used to account ...
Vainio, Seppo; Anneli, Pajunen; Hyona, Jukka
This study investigated the effect of the first language (L1) on the visual word recognition of inflected nouns in second language (L2) Finnish by native Russian and Chinese speakers. Case inflection is common in Russian and in Finnish but nonexistent in Chinese. Several models have been posited to describe L2 morphological processing. The unified…
Tan, Jye-Sheng; Yeh, Su-Ling
Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration. (c) 2015 APA, all rights reserved.
Calvo, Manuel G; Meseguer, Enrique
The independent and the combined influence of word length, word frequency, and contextual predictability on eye movements in reading was examined across processing stages under two priming-context conditions. Length, frequency, and predictability were used as predictors in multiple regression analyses, with parafoveal, early, late, and spillover eye movement measures as the dependent variables. There were specific effects of: (a) length, both on where to look (how likely a word was fixated and in which location) and how long to fixate, across all processing stages; (b) frequency, on how long to fixate a word, but not on where to look, at an early processing stage; and (c) predictability, both on how likely a word was fixated and for how long, in late processing stages. The source of influence for predictability was related to global rather than to local contextual priming. The contribution of word length was independent of contextual source. These results are relevant to determine both the time course of the influence of visual, lexical, and contextual factors on eye movements in reading, and which main component of eye movements, that is, location or duration, is affected.
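The regression setup described above can be sketched in a few lines: length, frequency, and predictability as simultaneous predictors of a gaze-duration measure. The data below are synthetic and the coefficients and variable names are illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
length = rng.integers(2, 12, n).astype(float)  # word length in letters
freq = rng.uniform(0.5, 5.0, n)                # log word frequency
pred = rng.uniform(0.0, 1.0, n)                # cloze predictability

# Synthetic gaze durations (ms): longer words slow reading, frequent and
# predictable words speed it up; true coefficients chosen for the sketch.
gaze = 220 + 6.0 * length - 12.0 * freq - 30.0 * pred + rng.normal(0, 15, n)

# Multiple regression: design matrix with an intercept column,
# solved by ordinary least squares.
X = np.column_stack([np.ones(n), length, freq, pred])
beta, *_ = np.linalg.lstsq(X, gaze, rcond=None)
intercept, b_len, b_freq, b_pred = beta
print(b_len, b_freq, b_pred)
```

Running the same regression separately on parafoveal, early, late, and spillover measures, as the study does, is what allows the time course of each predictor's influence to be traced.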
Vergara-Martínez, Marta; Comesaña, Montserrat; Perea, Manuel
Behavioral experiments have revealed that words appearing in many different contexts are responded to faster than words that appear in few contexts. Although this contextual diversity (CD) effect has been found to be stronger than the word-frequency (WF) effect, it is a matter of debate whether the facilitative effects of CD and WF reflect the same underlying mechanisms. The analysis of the electrophysiological correlates of CD may shed some light on this issue. This experiment is the first to examine the ERPs to high- and low-CD words when WF is controlled for. Results revealed that while high-CD words produced faster responses than low-CD words, their ERPs showed larger negativities (225-325 ms) than low-CD words. This result goes in the opposite direction of the ERP WF effect (high-frequency words elicit smaller N400 amplitudes than low-frequency words). The direction and scalp distribution of the CD effect resembled the ERP effects associated with "semantic richness." Thus, while apparently related, CD and WF originate from different sources during the access of lexical-semantic representations.
Gwilliams, L; Marantz, A
Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat
This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands. Copyright © 2015 Elsevier Inc. All rights reserved.
Fischer-Baum, Simon; Englebretson, Robert
Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically-complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality. Copyright © 2016 Elsevier B.V. All rights reserved.
Krafnick, Anthony J; Tan, Li-Hai; Flowers, D Lynn; Luetje, Megan M; Napoliello, Eileen M; Siok, Wai-Ting; Perfetti, Charles; Eden, Guinevere F
Learning to read is thought to involve the recruitment of left hemisphere ventral occipitotemporal cortex (OTC) by a process of "neuronal recycling", whereby object processing mechanisms are co-opted for reading. Under the same theoretical framework, it has been proposed that the visual word form area (VWFA) within OTC processes orthographic stimuli independent of culture and writing systems, suggesting that it is universally involved in written language. However, this "script invariance" has yet to be demonstrated in monolingual readers of two different writing systems studied under the same experimental conditions. Here, using functional magnetic resonance imaging (fMRI), we examined activity in response to English Words and Chinese Characters in 1st graders in the United States and China, respectively. We examined each group separately and found the readers of English as well as the readers of Chinese to activate the left ventral OTC for their respective native writing systems (using both a whole-brain and a bilateral OTC-restricted analysis). Critically, a conjunction analysis of the two groups revealed significant overlap between them for native writing system processing, located in the VWFA and therefore supporting the hypothesis of script invariance. In the second part of the study, we further examined the left OTC region responsive to each group's native writing system and found that it responded equally to Object stimuli (line drawings) in the Chinese-reading children. In English-reading children, the OTC responded much more to Objects than to English Words. Together, these results support the script invariant role of the VWFA and also support the idea that the areas recruited for character or word processing are rooted in object processing mechanisms of the left OTC. Copyright © 2016 Elsevier Inc. All rights reserved.
Obregon, Mateo; Shillcock, Richard
Recognition of a single word is an elemental task in innumerable cognitive psychology experiments, but involves unexpected complexity. We test a controversial claim that the human fovea is vertically divided, with each half projecting to either the contralateral or ipsilateral hemisphere, thereby influencing foveal word recognition. We report a…
Wilson, Maximiliano A.; Cuetos, Fernando; Davies, Rob; Burani, Cristina
Word age-of-acquisition (AoA) affects reading. The mapping hypothesis predicts AoA effects when input-output mappings are arbitrary. In Spanish, the orthography-to-phonology mappings required for word naming are consistent; therefore, no AoA effects are expected. Nevertheless, AoA effects have been found, motivating the present investigation of…
Both emotion and reward are primary modulators of cognition: emotional word content enhances word processing, and reward expectancy similarly amplifies cognitive processing from the perceptual up to the executive control level. Here, we investigate how these primary regulators of cognition interact. We studied how the anticipation of gain or loss modulates the neural time course (event-related potentials, ERPs) related to the processing of emotional words. Participants performed a semantic categorization task on emotional and neutral words, which were preceded by a cue indicating that performance could lead to monetary gain or loss. Emotion-related and reward-related effects occurred in different time windows, did not interact statistically, and showed different topographies. This speaks for an independence of reward expectancy and the processing of emotional word content. Therefore, privileged processing given to emotionally valenced words seems immune to short-term modulation of reward. Models of language comprehension should be able to incorporate effects of reward and emotion on language processing, and the current study argues for an architecture in which reward and emotion do not share a common neurobiological mechanism.
Mogey, Nora; Hartley, James
There is much debate about whether or not these days students should be able to word-process essay-type examinations as opposed to handwriting them, particularly when they are asked to word-process everything else. This study used word-processing software to examine the stylistic features of 13 examination essays written by hand and 24 by…
The present study examined whether processing words with affective connotations in a listener's native language may be modulated by accented speech. To address this question, we used the event-related potential (ERP) technique and recorded the cerebral activity of Spanish native listeners, who performed a semantic categorization task while listening to positive, negative and neutral words produced in standard Spanish or in four foreign accents. The behavioural results yielded longer latencies for emotional than for neutral words in both native and foreign accented speech, with no difference between positive and negative words. The electrophysiological results replicated previous findings from the emotional language literature, with the amplitude of the Late Positive Complex (LPC), associated with emotional language processing, being larger (more positive) for emotional than for neutral words at posterior scalp sites. Interestingly, foreign accented speech was found to interfere with the processing of positive valence and to go along with a negativity bias, possibly suggesting heightened attention to negative words. The manipulation employed in the present study provides an interesting perspective on the effects of accented speech on processing affective-laden information. It shows that higher order semantic processes that involve emotion-related aspects are sensitive to a speaker's accent.
Background: To date, the neural correlates of phonological word stress processing are largely unknown. Methods: In the present study, we investigated the processing of word stress and vowel quality using an identity matching task with pseudowords. Results: In line with previous studies, a bilateral fronto-temporal network comprising the superior temporal gyri extending into the sulci as well as the inferior frontal gyri was observed for word stress processing. Moreover, we found differences in the superior temporal gyrus and the superior temporal sulcus, bilaterally, for the processing of different stress patterns. For vowel quality processing, our data reveal a substantial contribution of the left intraparietal cortex. All activations were modulated by task demands, yielding different patterns for same and different pairs of stimuli. Conclusions: Our results suggest that the left superior temporal gyrus represents a basic system underlying stress processing, to which additional structures, including the homologous cortex site, are recruited with increasing difficulty.
Schmalzl, Laura; Nickels, Lyndsey
In contrast to the numerous treatment studies of spoken language deficits, there have been relatively few studies concerned with the treatment of spelling disorders. Among these, there have been only a small number that have targeted specific components of the spelling process. We describe a successful single case treatment study for FME, a woman with acquired dysgraphia, which was conducted within a cognitive neuropsychological framework. Pre-treatment assessment revealed a semantic deficit, impaired access to output orthography and probable additional degradation of the actual representations within the orthographic output lexicon. The treatment study was therefore directed towards relearning spellings by strengthening, and facilitating access to, specific orthographic representations for writing. In order to maximise the functional outcome for FME, treatment was focused on high frequency, irregular words. The treatment programme was carried out in two phases, one without and one with the use of mnemonics, and the results showed a selective training effect with the mnemonics alone. Treatment benefits were item specific but long lasting, and a significant improvement in FME's spelling performance was still evident at 2 months post-treatment. The current study confirms how cognitive neuropsychological theories and methods can be successfully applied to the assessment of acquired spelling impairments, and exemplifies how treatment with carefully designed mnemonics is of benefit if the inability to retrieve orthographic representations for writing is aggravated by a semantic deficit.
Delaney-Busch, Nathaniel; Wilkie, Gianna; Kuperberg, Gina
In this study, we used event-related potentials (ERPs) to examine how dimensions of emotion – valence and arousal – influence different stages of word processing under different task demands. In two experiments, two groups of participants viewed the same single emotional and neutral words while carrying out different tasks. In both experiments, valence (pleasant, unpleasant, and neutral) was fully crossed with arousal (high and low). We found that task made a substantial contribution to how valence and arousal modulated the Late Positive Complex (LPC), which is thought to reflect sustained evaluative processing (particularly of emotional stimuli). When participants performed a semantic categorization task in which emotion was not directly relevant to task performance, the LPC showed a larger amplitude for high-arousal words than low-arousal words, but no effect of valence. In contrast, when participants performed an overt valence categorization task, the LPC showed a large effect of valence (with unpleasant words eliciting the largest positivity), but no effect of arousal. These data show not only that valence and arousal act independently to influence word processing, but that their relative contributions to prolonged evaluative neural processes are strongly influenced by situational demands (and individual differences, as revealed in a subsequent analysis of subjective judgments). PMID:26833048
…in dyslexia provide support for a direct route from visual word forms to semantic and articulatory codes. There also seems to be independence in the… (Nature). Localization on this order seems appropriate for the studies that we have performed. The fact that we can find consistent localization across… the additional areas in the generate-repeat condition. In fact, when we make such a subtraction, we find activation in several additional areas not…
Hilte, M.; Reitsma, P.
Spelling pronunciations are hypothesized to be helpful in building up relatively stable phonologically underpinned orthographic representations, particularly for learning words with irregular phoneme-grapheme correspondences. In a four-week computer-based training, the efficacy of spelling…
Frank Domahs; Marion Grande; Walter Huber; Ulrike Domahs
There are contradicting assumptions and findings on the direction of word stress processing in German. To resolve this question, we asked participants to read tri-syllabic nonwords and stress ambiguous words aloud. Additionally, they also performed a working memory task (2-back task). In nonword reading, participants’ individual working memory capacity was positively correlated with assignment of main stress to the antepenultimate syllable, which is most distant to the word’s right edge, whil...
Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. It is unclear, however, if this is due to a lower threshold for perception of words, or a higher speed of processing for words than letters. We have investigated the WSE using methods based on a Theory of Visual Attention. In an experiment using single stimuli (words or letters) presented centrally, we show that the classical WSE is specifically reflected in perceptual processing speed: words are simply processed faster than single letters. It is also clear from this experiment that the word superiority effect can be observed at a large range of exposure durations, from the perceptual threshold to ceiling performance. Intriguingly, when multiple stimuli are presented…
Grossi, G; Coch, D; Coffey-Corina, S; Holcomb, P J; Neville, H J
We employed a visual rhyming priming paradigm to characterize the development of brain systems important for phonological processing in reading. We studied 109 right-handed, native English speakers within eight age groups: 7-8, 9-10, 11-12, 13-14, 15-16, 17-18, 19-20, and 21-23. Participants decided whether two written words (prime-target) rhymed (JUICE-MOOSE) or not (CHAIR-MOOSE). In similar studies of adults, two main event-related potential (ERP) effects have been described: a negative slow wave to primes, larger over anterior regions of the left hemisphere and hypothesized to index rehearsal of the primes, and a negative deflection to targets, peaking at 400-450 msec, maximal over right temporal-parietal regions, larger for nonrhyming than rhyming targets, and hypothesized to index phonological matching. In this study, these two ERP effects were observed in all age groups; however, the two effects showed different developmental timecourses. On the one hand, the frontal asymmetry to primes increased with age; moreover, this asymmetry was correlated with reading and spelling scores, even after controlling for age. On the other hand, the distribution and onset of the more posterior rhyming effect (RE) were stable across age groups, suggesting that phonological matching relied on similar neural systems across these ages. Behaviorally, both reaction times and accuracy improved with age. These results suggest that different aspects of phonological processing rely on different neural systems that have different developmental timecourses.
Producing written words requires central cognitive processes (such as orthographic long-term and working memory) as well as more peripheral processes responsible for generating the motor actions needed for producing written words in a variety of formats (handwriting, typing, etc.). In recent years, various functional neuroimaging studies have examined the neural substrates underlying the central and peripheral processes of written word production. This study provides the first quantitative meta-analysis of these studies by applying Activation Likelihood Estimation (ALE) methods (Turkeltaub et al., 2002). For alphabetic languages, we identified 11 studies (with a total of 17 experimental contrasts) that had been designed to isolate central and/or peripheral processes of word spelling (total number of participants = 146). Three ALE meta-analyses were carried out. One involved the complete set of 17 contrasts; two others were applied to subsets of contrasts to distinguish the neural substrates of central from peripheral processes. These analyses identified a network of brain regions reliably associated with the central and peripheral processes of word spelling. Among the many significant results is the finding that the regions with the greatest correspondence across studies were in the left inferior temporal/fusiform gyri and left inferior frontal gyrus. Furthermore, although the angular gyrus has traditionally been identified as a key site within the written word production network, none of the meta-analyses found it to be a consistent site of activation, identifying instead a region just superior/medial to the left angular gyrus in the left posterior intraparietal sulcus. In general, these meta-analyses and the discussion of results provide a valuable foundation upon which future studies that examine the neural basis of written word production can build.
Mnguni, Lindelani E
The use of visual models such as pictures, diagrams and animations in science education is increasing because of the complex nature of the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties associated with various concepts, especially those that exist at a microscopic level, such as DNA, the gene and meiosis, as well as those that exist on relatively large time scales, such as evolution. However, the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization, answering the question "how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching and studying of visual literacy in science education?" Based on various theories on cognitive processes during learning for science and general education, the author argues that the theoretical process of visualization consists of three stages, namely, Internalization of Visual Models, Conceptualization of Visual Models and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and the stages of visualization in science education are discussed.
Wheat, Katherine L.; Cornelissen, Piers L.; Sack, Alexander T.; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo
Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within [approximately]100 ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we…
Whiteley, Louise Emma; Spence, Charles; Haggard, Patrick
modalities, and can be extended to incorporate indirect representations of the body and functional portions of tools. In the present study, we investigate the source of a facilitatory effect of viewing the body on speeded visual discrimination reaction times. Participants responded to identical visual...
Lachmair, Martin; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara
Numerical processing and language processing are both grounded in space. In the present study we investigated whether these are fully independent phenomena, or whether they share a common basis. If number processing activates spatial dimensions that are also relevant for understanding words, then we can expect that processing numbers may influence subsequent lexical access to words. Specifically, if high numbers relate to upper space, then they can be expected to facilitate understanding of words such as bird, whose referents are typically found in the upper vertical space. The opposite should hold for low numbers. These should facilitate the understanding of words such as ground, referring to entities with referents in the lower vertical space. Indeed, in two experiments we found evidence for such an interaction between number and word processing. Because additional investigations on large text corpora eliminated a contribution of linguistic factors, this strongly suggests that understanding numbers and language is based on similar modal representations in the brain. The implications of these findings for a broader perspective on grounded cognition will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
The aim of Speech Recognition is to identify with machines what a speaker is saying. This process can recognise sounds (acoustic-phonetic decoding), words (isolated-words recognition) or sentences. Engineers can build such a system only for a specified user or for different speakers. ACHILE is a system based on parallel-distributed processes for speaker-independent acoustic-phonetic decoding and words recognition. This is a speaker-independent isolated-words recognition system without learnin...
Coch, Donna; Maron, Leeza; Wolf, Maryanne; Holcomb, Phillip J
In an investigation of the N400 component, event-related potentials (ERPs) elicited by 4 types of word stimuli (real words, pseudowords, random letter strings, and false fonts) and 3 types of picture stimuli (real pictures, pseudopictures, and picture parts) presented in separate lists were recorded from 10- and 11-year-old children. All types of word stimuli elicited an anteriorly distributed negativity peaking at about 400 msec (antN400). Words and pseudowords elicited similar ERPs, whereas ERPs to letter strings differed from those to both pseudowords and false fonts. All types of picture stimuli elicited dual anterior negativities (N350 and N430). Real pictures and pseudopictures elicited similar ERPs, whereas pseudopictures and picture parts elicited asymmetrical processing. The results are discussed in terms of increased sensitivity to and dependence on context in children.
Lõo, Kaidi; Järvikivi, Juhani; Baayen, R Harald
Estonian is a morphologically rich Finno-Ugric language with nominal paradigms that have at least 28 different inflected forms but sometimes more than 40. For languages with rich inflection, it has been argued that whole-word frequency, as a diagnostic of whole-word representations, should not be predictive for lexical processing. We report a lexical decision experiment, showing that response latencies decrease both with frequency of the inflected form and its inflectional paradigm size. Inflectional paradigm size was also predictive of semantic categorization, indicating it is a semantic effect, similar to the morphological family size effect. These findings fit well with the evidence for frequency effects of word n-grams in languages with little inflectional morphology, such as English. Apparently, the amount of information on word use in the mental lexicon is substantially larger than was previously thought. Copyright © 2018 Elsevier B.V. All rights reserved.
Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.; Volkova, Svitlana
Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte collected during the Russia-Ukraine crisis in 2014 – 2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.
Brase, Julia; Mani, Nivedita
Although bilinguals respond differently to emotionally valenced words in their first language (L1) relative to emotionally neutral words, similar effects of emotional valence are hard to come by in second language (L2) processing. We examine the extent to which these differences in first and second language processing are due to the context in which the 2 languages are acquired: L1 is typically acquired in more naturalistic settings (e.g., family) than L2 (e.g., at school). Fifty German-English bilinguals learned unfamiliar German and English negative and neutral words in 2 different learning conditions: One group (emotion video context) watched videos of a person providing definitions of the words with facial and gestural cues, whereas another group (neutral video context) received the same definitions without gestural and emotional cues. Subsequently, participants carried out an emotional Stroop task, a sentence completion task, and a recall task on the words they had just learned. We found that the effect of learning context on the influence of emotional valence on responding was modulated by a) language status, L1 versus L2, and b) task requirement. We suggest that a more nuanced approach is required to capture the differences in emotion effects in the speed versus accuracy of access to words across different learning contexts and different languages, in particular with regard to our finding that bilinguals respond to L2 words in a similar manner as L1 words provided that the learning context is naturalistic and incorporates emotional and prosodic cues. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Pexman, P M; Lupker, S J
In a lexical-decision task (LDT), Hino and Lupker (1996) reported a polysemy effect (faster response times for polysemous words [e.g., BANK]), and attributed this effect to enhanced feedback from the semantic system to orthographic units, for polysemous words. Using the same task, Pexman, Lupker, and Jared (in review) reported a homophone effect (slower response times for homophonic words [e.g., MAID]) and attributed this effect to inconsistent feedback from the phonological system to orthographic units, for homophones. In the present paper we test two predictions derived from this feedback explanation: Polysemy and homophone effects should (a) co-occur in a standard LDT (with pseudoword foils) and (b) both be larger with pseudohomophones (e.g., BRANE) as foils in LDT. The results supported both predictions.
Hodge, Milton H.; Britton, Bruce K.
Previous research by A. I. Schulman argued that an observed systematic decline in recognition memory in long word lists was due to the build-up of input and output proactive interference (PI). It also suggested that input PI resulted from process automatization; that is, each list item was processed or encoded in much the same way, producing a set…
Kristensen, Line Burholt; Engberg-Pedersen, Elisabeth; Nielsen, Andreas Højlund
In languages that have subject-before-object as their canonical word order, e.g. German, English and Danish, behavioral experiments have shown more processing difficulties for object-initial clauses (OCs) than for subject-initial clauses (SCs). For processing of OCs in such languages, neuroimagin...
Bobrik, R.; Bauer, T.; Reichert, M.U.; Bauknecht, K.; Pröll, B.; Werthner, H.
A monitoring component is a much-needed module in order to provide an integrated view on system-spanning and cross-organizational business processes. Current monitoring tools, however, do not offer adequate process visualization support. In particular, processes are always visualized in the way they…
Citron, Francesca M M; Abugaber, David; Herbert, Cornelia
The affective dimensions of emotional valence and emotional arousal affect processing of verbal and pictorial stimuli. Traditional emotional theories assume a linear relationship between these dimensions, with valence determining the direction of a behavior (approach vs. withdrawal) and arousal its intensity or strength. In contrast, according to the valence-arousal conflict theory, both dimensions are interactively related: positive valence and low arousal (PL) are associated with an implicit tendency to approach a stimulus, whereas negative valence and high arousal (NH) are associated with withdrawal. Hence, positive, high-arousal (PH) and negative, low-arousal (NL) stimuli elicit conflicting action tendencies. By extending previous research that used several tasks and methods, the present study investigated whether and how emotional valence and arousal affect subjective approach vs. withdrawal tendencies toward emotional words during two novel tasks. In Study 1, participants had to decide whether they would approach or withdraw from concepts expressed by written words. In Studies 2 and 3 participants had to respond to each word by pressing one of two keys labeled with an arrow pointing upward or downward. Across experiments, positive and negative words, high or low in arousal, were presented. In Study 1 (explicit task), in line with the valence-arousal conflict theory, PH and NL words were responded to more slowly than PL and NH words. In addition, participants decided to approach positive words more often than negative words. In Studies 2 and 3, participants responded faster to positive than negative words, irrespective of their level of arousal. Furthermore, positive words were significantly more often associated with "up" responses than negative words, thus supporting the existence of implicit associations between stimulus valence and response coding (positive is up and negative is down). Hence, in contexts in which participants' spontaneous responses are…
Yeh, Su-Ling; He, Sheng; Cavanagh, Patrick
Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.
Huisingh, Carrie; McGwin, Gerald; Owsley, Cynthia
Many studies on vision and driving cessation have relied on measures of sensory function, which are insensitive to the higher-order cognitive aspects of visual processing. The purpose of this study was to examine the association between traditional measures of visual sensory function and higher-order visual processing skills with incident driving cessation in a population-based sample of older drivers. Two thousand licensed drivers aged 70 years or older were enrolled and followed for three years. Tests for central vision and visual processing were administered at baseline and included visual acuity, contrast sensitivity, sensitivity in the driving visual field, visual processing speed (useful field of view [UFOV] Subtest 2 and Trails B) and spatial ability measured by the Visual Closure Subtest of the Motor-free Visual Perception Test. Participants self-reported the month and year of driving cessation and provided a reason for cessation. Cox proportional hazards models were used to generate crude and adjusted hazard ratios with 95% confidence intervals between visual functioning characteristics and risk of driving cessation over a three-year period. During the study period, 164 participants stopped driving, which corresponds to a cumulative incidence of 8.5 per cent. Impaired contrast sensitivity, visual fields, visual processing speed (UFOV and Trails B) and spatial ability were significant risk factors for subsequent driving cessation after adjusting for age, gender, marital status, number of medical conditions and miles driven. Visual acuity impairment was not associated with driving cessation. Medical problems (63 per cent), specifically musculoskeletal and neurological problems, as well as visual problems (17 per cent) were cited most frequently as the reason for driving cessation. Assessment of cognitive and visual functioning can provide useful information about subsequent risk of driving cessation among older drivers. In addition, a variety of factors
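The epidemiological calculations this abstract reports (cumulative incidence, hazard ratios with 95% confidence intervals) can be sketched in miniature. Only the 164-of-2,000 cessation figure below comes from the abstract; the group split, event counts, and person-years are invented for illustration, and the crude rate-ratio approximation stands in for the Cox proportional hazards model the authors actually fitted.

```python
import math

# Cumulative incidence of driving cessation: events / cohort at baseline.
# 164 cessations among 2,000 enrolled drivers (figures from the abstract);
# the reported 8.5% presumably accounts for attrition during follow-up.
events, cohort = 164, 2000
cum_incidence = events / cohort

def crude_rate_ratio(events_a, persontime_a, events_b, persontime_b):
    """Crude incidence-rate ratio with a 95% CI from the usual
    log-transform approximation: exp(ln RR +/- 1.96 * sqrt(1/a + 1/b))."""
    rr = (events_a / persontime_a) / (events_b / persontime_b)
    half_width = 1.96 * math.sqrt(1 / events_a + 1 / events_b)
    return rr, rr * math.exp(-half_width), rr * math.exp(half_width)

# Hypothetical split of the cohort by impaired vs. normal contrast
# sensitivity; person-years and event counts are invented for illustration.
rr, lo, hi = crude_rate_ratio(60, 1200.0, 104, 4800.0)
print(f"cumulative incidence {cum_incidence:.1%}; "
      f"crude rate ratio {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A Cox model additionally adjusts for covariates (age, gender, miles driven) and handles censoring, which is why the adjusted hazard ratios in the study can differ from any crude ratio computed this way.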
Venker, Courtney E.
Deficits in visual disengagement are one of the earliest emerging differences in infants who are later diagnosed with autism spectrum disorder. Although researchers have speculated that deficits in visual disengagement could have negative effects on the development of children with autism spectrum disorder, we do not know which skills are…
Eddington, Chelsea M; Tokowicz, Natasha
The majority of words in the English language do not correspond to a single meaning, but rather correspond to two or more unrelated meanings (i.e., are homonyms) or multiple related senses (i.e., are polysemes). It has been proposed that the different types of "semantically-ambiguous words" (i.e., words with more than one meaning) are processed and represented differently in the human mind. Several review papers and books have been written on the subject of semantic ambiguity (e.g., Adriaens, Small, Cottrell, & Tanenhaus, 1988; Burgess & Simpson, 1988; Degani & Tokowicz, 2010; Gorfein, 1989, 2001; Simpson, 1984). However, several more recent studies (e.g., Klein & Murphy, 2001; Klepousniotou, 2002; Klepousniotou & Baum, 2007; Rodd, Gaskell, & Marslen-Wilson, 2002) have investigated the role of the semantic similarity between the multiple meanings of ambiguous words on processing and representation, whereas this was not the emphasis of previous reviews of the literature. In this review, we focus on the current state of the semantic ambiguity literature that examines how different types of ambiguous words influence processing and representation. We analyze the consistent and inconsistent findings reported in the literature and how factors such as semantic similarity, meaning/sense frequency, task, timing, and modality affect ambiguous word processing. We discuss the findings with respect to recent parallel distributed processing (PDP) models of ambiguity processing (Armstrong & Plaut, 2008, 2011; Rodd, Gaskell, & Marslen-Wilson, 2004). Finally, we discuss how experience/instance-based models (e.g., Hintzman, 1986; Reichle & Perfetti, 2003) can inform a comprehensive understanding of semantic ambiguity resolution.
Leerdam, M. van; Bosman, A.M.T.; Groot, A.M.B. de
Three experiments investigated whether perception of a spelling-to-sound inconsistent word such as MOOD involves coding of inappropriate phonology caused by knowledge of enemy neighbors (e.g., BLOOD) in non-native speakers. In a new bimodal matching task, Dutch-English bilinguals judged the
Messbauer, V.C.S.; de Jong, P.F.
Verbal and non-verbal learning were investigated in 21 8-11-year-old dyslexic children and chronological-age controls, and in 21 7-9-year-old reading-age controls. Tasks involved the paired associate learning of words, nonwords, or symbols with pictures. Both learning and retention of associations
Background: Dyslexics read concrete words better than abstract ones. As a result, one of the major problems facing dyslexics is the fact that only part of the information that they require to communicate is concrete, i.e. can easily be pictured. Method: The experiment involved dyslexic third-grade, English-speaking children (8-year-olds) divided…
Mathews, Sarah A.
This article highlights the use of Shaun Tan's "The Arrival" to teach literacy to English Language Learners in social studies classrooms. The featured text is a book that displays the complexity of migration within a text that does not feature a single written word. The author describes a variety of mini-lessons geared towards…
Chun, J-W; Choi, J; Cho, H; Lee, S-K; Kim, D J
Although the Internet is an important tool in daily life, controlling Internet use is necessary to address problematic use. This study set out to assess the cognitive control of affective events in Internet gaming disorder (IGD) and examined the influence of IGD on neural activity related to swear words in young adolescents. We demonstrated differences between adolescents with IGD and healthy control adolescents (HC) across swear, negative, and neutral word conditions. Swear words induced more activation than negative words in regions related to social interaction and emotional processing, such as the superior temporal sulcus, right temporoparietal junction, and orbitofrontal cortex (OFC). Adolescents with IGD exhibited reduced activation in the right OFC, related to cognitive control, and in the dorsal anterior cingulate cortex (dACC), related to social rejection, during the swear word condition. In addition, adolescents with IGD showed negatively correlated activity in the right amygdala toward swear words, indicating the important role of the amygdala in the control of aggression in adolescents with IGD. These findings enhance our understanding of social–emotional perception in adolescents with IGD. PMID:26305475
Imbir, Kamil Konrad; Jarymowicz, Maria Teresa; Spustek, Tomasz; Kuś, Rafał; Żygierewicz, Jarosław
We distinguish two evaluative systems which evoke automatic and reflective emotions. Automatic emotions are direct reactions to stimuli whereas reflective emotions are always based on verbalized (and often abstract) criteria of evaluation. We conducted an electroencephalography (EEG) study in which 25 women were required to read and respond to emotional words which engaged either the automatic or reflective system. Stimulus words were emotional (positive or negative) and neutral. We found an effect of valence on an early response with dipolar fronto-occipital topography; positive words evoked a higher amplitude response than negative words. We also found that topographically specific differences in the amplitude of the late positive complex were related to the system involved in processing. Emotional stimuli engaging the automatic system were associated with significantly higher amplitudes in the left-parietal region; the response to neutral words was similar regardless of the system engaged. A different pattern of effects was observed in the central region, neutral stimuli engaging the reflective system evoked a higher amplitudes response whereas there was no system effect for emotional stimuli. These differences could not be reduced to effects of differences between the arousing properties and concreteness of the words used as stimuli. PMID:25955719
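The ERP methodology underlying this study, averaging many time-locked EEG epochs so that activity unrelated to the stimulus cancels out and then comparing mean amplitude within a component window across conditions, can be sketched in a few lines. Everything below is synthetic: the epoch length, window, trial counts, and effect sizes are placeholders, not the study's parameters.

```python
import random

random.seed(0)
N_SAMPLES = 200          # samples per epoch (e.g., 800 ms at 250 Hz, assumed)
WINDOW = (100, 150)      # component window in samples, placeholder

def make_epoch(effect_amp):
    """One synthetic trial: a sustained positivity in the window plus noise."""
    lo, hi = WINDOW
    return [effect_amp * (1.0 if lo <= t < hi else 0.0) + random.gauss(0, 2.0)
            for t in range(N_SAMPLES)]

def erp_average(epochs):
    """Pointwise average across trials -- the classic ERP estimator."""
    n = len(epochs)
    return [sum(e[t] for e in epochs) / n for t in range(N_SAMPLES)]

def mean_amplitude(erp, window):
    """Mean amplitude of the averaged waveform inside the window."""
    lo, hi = window
    return sum(erp[lo:hi]) / (hi - lo)

# 60 trials per condition: the 'emotional' condition carries a larger
# late positivity than the 'neutral' one (amplitudes are arbitrary).
emotional = erp_average([make_epoch(3.0) for _ in range(60)])
neutral = erp_average([make_epoch(1.0) for _ in range(60)])

diff = mean_amplitude(emotional, WINDOW) - mean_amplitude(neutral, WINDOW)
print(f"amplitude difference in component window: {diff:.2f} a.u.")
```

Averaging over 60 trials shrinks the single-trial noise by a factor of sqrt(60), which is why the 2-unit condition difference built into the simulation survives despite noise twice that size on individual samples.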
Tan, Eric J; Yelland, Gregory W; Rossell, Susan L
Language dysfunction is proposed to relate to the speech disturbances in schizophrenia, which are more commonly referred to as formal thought disorder (FTD). Presently, language production deficits in schizophrenia are better characterised than language comprehension difficulties. This study thus aimed to examine three aspects of language comprehension in schizophrenia: (1) the role of lexical processing, (2) meaning attribution for words and sentences, and (3) the relationship between comprehension and production. Fifty-seven schizophrenia/schizoaffective disorder patients and 48 healthy controls completed a clinical assessment and three language tasks assessing word recognition, synonym identification, and sentence comprehension. Poorer patient performance was expected on the latter two tasks. Recognition of word form was not impaired in schizophrenia, indicating intact lexical processing. Whereas single-word synonym identification was not significantly impaired, there was a tendency to attribute word meanings based on phonological similarity with increasing FTD severity. Importantly, there was a significant sentence comprehension deficit for processing deep structure, which correlated with FTD severity. These findings established a receptive language deficit in schizophrenia at the syntactic level. There was also evidence for a relationship between some aspects of language comprehension and speech production/FTD. Apart from indicating language as another mechanism in FTD aetiology, the data also suggest that remediating language comprehension problems may be an avenue to pursue in alleviating FTD symptomatology.
Hsiao, Janet H; Liu, Tianyin
Previous studies have shown a right-visual-field (RVF)/left-hemisphere (LH) advantage in Chinese phonetic compound pronunciation. Here, we contrast the processing of two phonetic compound types: a dominant structure in which a semantic component appears on the left and a phonetic component on the right (SP characters), and a minority structure with the opposite arrangement (PS characters). We show that this RVF/LH advantage was observed only in SP character pronunciation, but not in PS character pronunciation. This result suggests that SP character processing is more LH lateralized than is PS character processing and is consistent with corresponding ERP N170 data. This effect may be due to the dominance of SP characters in the lexicon, which makes readers opt to obtain phonological information from the right of the characters. This study thus shows that the overall information distribution of word components in the lexicon may influence how written words are processed in the brain. Supplemental materials for this article may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental.
Cai, Wei; Lee, Benny P. H.
This study examines the effect of contextual clues on the use of strategies (inferencing and ignoring) and knowledge sources (semantics, morphology, world knowledge, and others) for processing unfamiliar words in listening comprehension. Three types of words were investigated: words with local co-text clues, global co-text clues and extra-textual…
Jiang, Tianjiao; Sun, Lining; Zhu, Lei
There is increasing evidence demonstrating that power judgment is affected by vertical information. Such interaction between vertical space and power (i.e., response facilitation under space-power congruent conditions) is generally elicited in paradigms that require participants to explicitly evaluate the power of the presented words. The current research explored the possibility that explicit evaluative processing is not a prerequisite for the emergence of this effect. Here we compared the influence of vertical information on a standard explicit power evaluation task with influence on a task that linked power with stimuli in a more incidental manner, requiring participants to report whether the words represented people or animals or the font of the words. The results revealed that although the effect is more modest, the interaction between responses and power is also evident in an incidental task. Furthermore, we also found that explicit semantic processing is a prerequisite to ensure such an effect. Copyright © 2015 Elsevier Inc. All rights reserved.
Boonen, A.J.H.; van Wesel, F.; Jolles, J.; van der Schoot, M.
This study examined the role of visual representation type, spatial ability, and reading comprehension in word problem solving in 128 sixth-grade students by using primarily an item-level approach rather than a test-level approach. We revealed that compared to students who did not make a visual
de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C
The cognitive model of reading comprehension (RC) posits that RC results from the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills, such as processing speed, could be integrated into this model, and have consistently indicated that processing speed influences and is an important predictor of the model's main components, such as vocabulary for comprehension and phonological awareness for word recognition. The present study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. Forty children and adolescents (8-13 years) were divided into a dyslexic group (DG; 18 children, MA = 10.78, SD = 1.66) and a control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences in accuracy on oral and reading comprehension, phonological awareness, naming, and vocabulary scores. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. The results corroborate the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on RC tests. The data support the importance of delimiting the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.
As we discuss, a stationary stochastic process is nonergodic when a random persistent topic can be detected in the infinite random text sampled from the process, whereas we call the process strongly nonergodic when an infinite sequence of independent random bits, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we adapt this property back to ergodic processes. Subsequently, we call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. We present a simple example of such a process. Moreover, we demonstrate an assertion which we call the theorem about facts and words. This proposition states that the number of probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the Prediction by Partial Matching (PPM) compression algorithm. We also observe that the number of word-like strings for a sample of plays by Shakespeare follows an empirical stepwise power law, in stark contrast to Markov processes. Hence, we suppose that natural language considered as a process is not only non-Markov but also perigraphic.
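The empirical observation at the end of this abstract, that the number of distinct word types grows like a power of text length (Heaps' law), can be checked on any corpus in a few lines. The sketch below fits the exponent by least squares in log-log coordinates; note that it counts whitespace-separated tokens from a synthetic Zipf-distributed corpus, which is only a rough stand-in for the PPM-derived "word-like strings" the authors actually measure.

```python
import math
import random

def vocab_growth(tokens, checkpoints):
    """Distinct word types observed after the first n tokens, for each n."""
    seen, sizes, k = set(), [], 0
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if k < len(checkpoints) and i == checkpoints[k]:
            sizes.append(len(seen))
            k += 1
    return sizes

def fit_power_law(xs, ys):
    """Least-squares slope of log y against log x, i.e. beta in y ~ C*x**beta."""
    lx, ly = [math.log(x) for x in xs], [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)

# Toy corpus with Zipf-like token frequencies standing in for real text.
random.seed(1)
vocab = [f"w{r}" for r in range(1, 50001)]
weights = [1 / r for r in range(1, 50001)]
tokens = random.choices(vocab, weights=weights, k=20000)

checkpoints = [1250, 2500, 5000, 10000, 20000]
beta = fit_power_law(checkpoints, vocab_growth(tokens, checkpoints))
print(f"estimated Heaps exponent: {beta:.2f}")
```

For natural language, exponents below 1 are typical (vocabulary grows, but sublinearly); a Markov process over a fixed alphabet would instead saturate, which is the contrast the abstract draws.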
Jackson, Georgina M; Shepherd, Tracy; Mueller, Sven C; Husain, Masud; Jackson, Stephen R
When presented with two objects, patients with simultanagnosia show a marked impairment in naming both items. This has led many authors to conclude that the second item is not being processed (e.g., Robinson, 2003). However, this deficit may instead reflect a deficit in explicit, or conscious, report. We investigated this issue using a semantic priming paradigm that allowed us to assess implicit processing of the second "unseen" item. We presented a patient with bilateral parietal damage with pairs of pictures that were either from the same or a different semantic category. The patient was asked either to classify one of the pictures or to name both pictures. When the items were from different categories, the patient's classification performance was significantly poorer than when they were from the same category, even though he could rarely explicitly report both items. These findings are consistent with the notion that the meaning of the "unseen" item influenced the reporting of the "seen" item. Consequently, the deficit seen in this patient does not seem to reflect an inability to process more than one item simultaneously but rather a deficit in explicitly identifying multiple items.
The ability to read words fluently and seemingly effortlessly is one of the few uniquely human attributes, but one which has assumed inordinate significance because of the role that this activity has come to play in modern society. A disadvantage in reading ability not only has a profound personal impact on the individuals concerned but, through the economic and social problems it creates, also has a wider negative influence on society at large. According to current government figures in the UK, som...
Georgiou, George K; Papadopoulos, Timothy C; Zarouna, Elena; Parrila, Rauno
The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological awareness, orthographic processing, short-term memory, rapid automatized naming, auditory and visual processing, and reading fluency to 21 Grade 6 children with dyslexia, 21 chronological age-matched controls and 20 Grade 3 reading age-matched controls. The results indicated that the children with dyslexia did not experience auditory processing deficits, but about half of them showed visual processing deficits. Both orthographic processing and rapid automatized naming deficits were associated with dyslexia in our sample, but it is less clear that they were associated with visual processing deficits. Copyright © 2012 John Wiley & Sons, Ltd.
McCrudden, Matthew T.; Rapp, David N.
We regularly consult and construct visual displays that are intended to communicate important information. The power of these displays and the instructional messages we attempt to comprehend when using them emerge from the information included in the display and by their spatial arrangement. In this article, we identify common types of visual…
Pitt, M A; Smith, K L; Klein, J M
Spoken words have a rich structural organization in memory, consisting of syllabic and subsyllabic representations. A phoneme monitoring paradigm, in which the target phoneme occurs more frequently in one syllabic position than another (e.g., onset of the 2nd syllable vs. the coda of the 1st syllable: neu-tral vs. nut-meg; C. Pallier, N. Sebastian-Galles, T. Felguera, A. Christophe, & J. Mehler, 1993) was used to explore the formation of syllabic structure during word processing. Experiment 2 investigated how a recognition system that uses syllabic structure processes words with unclear syllable boundaries (e.g., pa-lace or pal-ace?). Two methodological issues were explored: The importance of a baseline condition for measuring effects of induction (Experiment 1) and the form of the representation used in the induction paradigm (Experiment 3). Findings suggest that syllabic structure begins to form early in word processing, and they demonstrate the adequacy of the induction procedure for measuring such processes.
Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.
This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…
Weiss, Daniel J.; Gerfen, Chip; Mitchel, Aaron D.
The process of word segmentation is flexible, with many strategies potentially available to learners. This experiment explores how segmentation cues interact, and whether successful resolution of cue competition is related to general executive functioning. Participants listened to artificial speech streams that contained both statistical and…
Babineau, Mireille; Shi, Rushen
We examined how toddlers process lexical ambiguity where different underlying forms are neutralized at the surface level. In a preferential-looking procedure, French-learning 30-month-olds were familiarized with either liaison-ambiguous phrases (i.e., sentences containing a determiner and a non-word, e.g., "ces /z/onches," "these…
McKenna, Peter E.; Glass, Alexandra; Rajendran, Gnanathusharan; Corley, Martin
Previous investigations into metonymy comprehension in ASD have confounded metonymy with anaphora, and outcome with process. Here we show how these confounds may be avoided, using data from non-diagnosed participants classified using Autism Quotient. Participants read sentences containing target words with novel or established metonymic senses…
The purpose of this study was to determine the effects of the combination of word prediction and text-to-speech software on the writing process of translating. Participants for this study included 10 elementary and middle school students who had a diagnosis of disorder of written expression. A modified multiple case series was used to collect data…
Bonnefond, Mathilde; Van der Henst, Jean-Baptiste
This study investigates the ERP components associated with the processing of words that are critical to generating and rejecting deductive conditional Modus Ponens arguments ("If P then Q; P//"Therefore, "Q"). The generation of a logical inference is investigated by placing a verb in the minor premise that matches the one used in the antecedent of…
Lund, Emily; Schuele, C Melanie
The purpose of this study was to compare types of maternal auditory-visual input about word referents available to children with cochlear implants, children with normal hearing matched for age, and children with normal hearing matched for vocabulary size. Although other works have considered the acoustic qualities of maternal input provided to children with cochlear implants, this study is the first to consider auditory-visual maternal input provided to children with cochlear implants. Participants included 30 mother-child dyads from three groups: children who wore cochlear implants (n = 10 dyads), children matched for chronological age (n = 10 dyads), and children matched for expressive vocabulary size (n = 10 dyads). All participants came from English-speaking families, with the families of children with hearing loss committed to developing listening and spoken language skills (not sign language). All mothers had normal hearing. Mother-child interactions were video recorded during mealtimes in the home. Each dyad participated in two mealtime observations. Maternal utterances were transcribed and coded for (a) nouns produced, (b) child-directed utterances, (c) nouns unknown to children per maternal report, and (d) auditory and visual cues provided about referents for unknown nouns. Auditory and visual cues were coded as either converging, diverging, or auditory-only. Mothers of children with cochlear implants provided percentages of converging and diverging cues that were similar to the percentages of mothers of children matched for chronological age. Mothers of children matched for vocabulary size, on the other hand, provided a higher percentage of converging auditory-visual cues and lower percentage of diverging cues than did mothers of children with cochlear implants. Groups did not differ in provision of auditory-only cues. The present study represents the first step toward identification of environmental input characteristics that may affect lexical learning
Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko
Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating of the neural basis for visual imprinting, we focused on visual information processing. A lesion in the visual wulst, which is similar functionally to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color the chick was imprinted to. These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.
Brittain, Philip; Ffytche, Dominic H; McKendrick, Allison; Surguladze, Simon
Visual processing deficits are well recognised in schizophrenia and have potentially important clinical implications. First, the pattern of deficits for different visual tasks may help understand the underlying pathophysiology of the visual dysfunction. Second, several studies report deficits correlating with functional outcomes, suggesting that outcome improvement is possible through visual remediation strategies. We investigated these issues in a group of 64 schizophrenia patients and matched controls with a battery of visual tasks targeting different points along the visual pathways and by examining direct and indirect relationships (via a potential mediator) of such deficits to functional outcome. The schizophrenia group was significantly worse on the visual tasks overall, with the deficit constant for low- and high-level processing. Zero-order correlations suggested minimal association between vision and outcome; however, correlations between three visual tasks and 'social perceptual' ability were found, which in turn correlated with functional outcome; path analysis confirmed a significant but small and indirect effect of 'biological motion' processing ability on functional outcome mediated by 'social perception'. In conclusion, the pathophysiology of visual dysfunction affects low- and high-level visual areas similarly, and the relationship between deficits and outcome is small and indirect.
Morton Ninomiya, Melody E
With increased attention to knowledge translation and community engagement in the applied health research field, many researchers aim to find effective ways of engaging health policy and decision makers and community stakeholders. While visual graphics such as graphs, charts, figures and photographs are common in scientific research dissemination, they are less common as a communication tool in research. In this commentary, I illustrate how and why visual graphics were created and used to facilitate dialogue and communication throughout all phases of a community-based health research study with a rural Indigenous community, advancing community engagement and knowledge utilization of a research study. I suggest that it is essential that researchers consider the use of visual graphics to accurately communicate and translate important health research concepts and content in accessible forms for diverse research stakeholders and target audiences.
Ding, Nai; Melloni, Lucia; Tian, Xing; Poeppel, David
To flexibly convey meaning, the human language faculty iteratively combines smaller units such as words into larger structures such as phrases based on grammatical principles. During comprehension, however, it remains unclear how the brain encodes the relationship between words and combines them into phrases. One hypothesis is that internal grammatical principles governing language generation are also used to parse the hierarchical syntactic structure of spoken language during comprehension. An alternative hypothesis suggests, in contrast, that decoding language during comprehension solely relies on statistical relationships between words or strings of words, i.e., the N-gram statistics, while grammatical rules are not used and no hierarchical linguistic structures are constructed. Here, we briefly review distinctions between rule-based hierarchical models and statistics-based linear string models for comprehension, and how the neurolinguistic approach can shed light on this debate. Recent neurolinguistic studies show that tracking of probabilistic relationships between words is not sufficient to explain cortical encoding of linguistic constituent structure and support the involvement of rule-based processing during language comprehension.
Afonso, Olivia; Suárez-Coalla, Paz; González-Martín, Nagore; Cuetos, Fernando
Although several studies have found that the sublexical route of spelling has an effect on handwriting movements, the ability of lexical variables to modulate peripheral processes during writing is less clear. This study addresses the hypothesis that word frequency affects writing durations only during writing acquisition, and that at some point in development, the handwriting system becomes a relatively autonomous system unaffected by lexical variables. Spanish children attending Grades 2, 4, and 6 performed a spelling-to-dictation and a copy task in which word frequency was manipulated. Results revealed that written latencies decreased with age, especially between Grades 2 and 4, and that writing durations decreased between these two groups. All these measures were longer during copying, but the effect of task on written latencies and in-air pen trajectories was smaller for older children. Crucially, a significant word frequency effect on writing durations was observed only in Grade 2. This effect was marginally significant in Grade 4 and disappeared in Grade 6. However, all groups showed a similar effect of word frequency on written latencies. These findings suggest that lexical processes impact peripheral processes during writing acquisition and that this influence diminishes to eventually disappear at some point in development.
Caroline M. Whiting
Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.
Riès, Stéphanie; Legou, Thierry; Burle, Borís; Alario, F-Xavier; Malfait, Nicole
Since the 19th century, it has been known that response latencies are longer for naming pictures than for reading words aloud. While several interpretations have been proposed, a common general assumption is that this difference stems from cognitive word-selection processes and not from articulatory processes. Here we show that, contrary to this widely accepted view, articulatory processes are also affected by the task performed. To demonstrate this, we used a procedure that to our knowledge had never been used in research on language processing: response-latency fractionating. Along with vocal onsets, we recorded the electromyographic (EMG) activity of facial muscles while participants named pictures or read words aloud. On the basis of these measures, we were able to fractionate the verbal response latencies into two types of time intervals: premotor times (from stimulus presentation to EMG onset), mostly reflecting cognitive processes, and motor times (from EMG onset to vocal onset), related to motor execution processes. We showed that premotor and motor times are both longer in picture naming than in reading, although articulation is already initiated in the latter measure. Future studies based on this new approach should bring valuable clues for a better understanding of the relation between the cognitive and motor processes involved in speech production.
Smith, Linda B; Colunga, Eliana; Yoshida, Hanako
Learning depends on attention. The processes that cue attention in the moment dynamically integrate learned regularities and immediate contextual cues. This paper reviews the extensive literature on cued attention and attentional learning in the adult literature and proposes that these fundamental processes are likely significant mechanisms of change in cognitive development. The value of this idea is illustrated using phenomena in children's novel word learning.
Martensen, H.E.; Maris, E.; Dijkstra, A.F.J.
In one lexical decision and three naming experiments, we established the effect of visually separating two letters that have to be considered jointly for pronunciation. Segmentation effects were studied for digraphic vowels and for an ambiguous onset letter (C) whose pronunciation is determined by the
Albrecht, Thorsten; Vorberg, Dirk
Our ability to identify even complex scenes in rapid serial visual presentation (RSVP) is astounding, but memory for such items seems lacking. Rather than pictures, we used streams of more than 200 verbal stimuli, rushing by on the screen at a rate of more than 12 items per second while participants had to detect infrequent names (Experiments 1…
Von Holzen, Katie; Nishibayashi, Leo-Lyuki; Nazzi, Thierry
Segmentation skill and the preferential processing of consonants (C-bias) develop during the second half of the first year of life and it has been proposed that these facilitate language acquisition. We used Event-related brain potentials (ERPs) to investigate the neural bases of early word form segmentation, and of the early processing of onset consonants, medial vowels, and coda consonants, exploring how differences in these early skills might be related to later language outcomes. Our results with French-learning eight-month-old infants primarily support previous studies that found that the word familiarity effect in segmentation is developing from a positive to a negative polarity at this age. Although as a group infants exhibited an anterior-localized negative effect, inspection of individual results revealed that a majority of infants showed a negative-going response (Negative Responders), while a minority showed a positive-going response (Positive Responders). Furthermore, all infants demonstrated sensitivity to onset consonant mispronunciations, while Negative Responders demonstrated a lack of sensitivity to vowel mispronunciations, a developmental pattern similar to previous literature. Responses to coda consonant mispronunciations revealed neither sensitivity nor lack of sensitivity. We found that infants showing a more mature, negative response to newly segmented words compared to control words (evaluating segmentation skill) and mispronunciations (evaluating phonological processing) at test also had greater growth in word production over the second year of life than infants showing a more positive response. These results establish a relationship between early segmentation skills and phonological processing (not modulated by the type of mispronunciation) and later lexical skills.
Cox, Dustin; Hong, Sang Wook
To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine, (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.
Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf
Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing; however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled-body (control) that appeared at the frequency of the on-line recorded participants' heartbeat or not (not-synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time-window we detected a second effect characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information for the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing.
Unzueta-Arce, J; García-García, R; Ladera-Fernández, V; Perea-Bartolomé, M V; Mora-Simón, S; Cacho-Gutiérrez, J
Patients who have difficulties recognising visual form stimuli are usually labelled as having visual agnosia. However, recent studies let us identify different clinical manifestations corresponding to discrete diagnostic entities which reflect a variety of deficits along the continuum of cortical visual processing. We reviewed different clinical cases published in medical literature as well as proposals for classifying deficits in order to provide a global perspective of the subject. Here, we present the main findings on the neuroanatomical basis of visual form processing and discuss the criteria for evaluating processing which may be abnormal. We also include an inclusive diagram of visual form processing deficits which represents the different clinical cases described in the literature. Lastly, we propose a boosted decision tree to serve as a guide in the process of diagnosing such cases. Although the medical community largely agrees on which cortical areas and neuronal circuits are involved in visual processing, future studies making use of new functional neuroimaging techniques will provide more in-depth information. A well-structured and exhaustive assessment of the different stages of visual processing, designed with a global view of the deficit in mind, will give a better idea of the prognosis and serve as a basis for planning personalised psychostimulation and rehabilitation strategies.
Meppelink, Anne Marthe; de Jong, Bauke M.; Renken, Remco; Leenders, Klaus L.; Cornelissen, Frans W.; van Laar, Teus
Impaired visual processing may play a role in the pathophysiology of visual hallucinations in Parkinson's disease. In order to study involved neuronal circuitry, we assessed cerebral activation patterns both before and during recognition of gradually revealed images in Parkinson's disease patients
Reading is an important part of our daily life, and rapid responses to emotional words have received a great deal of research interest. Our study employed rapid serial visual presentation to detect the time course of emotional noun processing using event-related potentials. We performed a dual-task experiment, where subjects were required to judge whether a given number was odd or even, and the category into which each emotional noun fit. In terms of P1, we found that there was no negativity bias for emotional nouns. However, emotional nouns elicited larger amplitudes in the N170 component in the left hemisphere than did neutral nouns. This finding indicated that in later processing stages, emotional words can be discriminated from neutral words. Furthermore, positive, negative, and neutral words were different from each other in the late positive complex, indicating that in the third stage, even different emotions can be discerned. Thus, our results indicate that in a three-stage model the latter two stages are more stable and universal.
Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto
Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were instructed to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports on modality-nonspecific language processing and visual word-form processing; the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's area, BA 45, 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.
Gilbert, Charles D.; Li, Wu
Reentrant or feedback pathways between cortical areas carry rich and varied information about behavioral context, including attention, expectation, perceptual task, working memory and motor commands. Neurons receiving such inputs effectively function as adaptive processors that are able to assume different functional states according to the task being executed. Recent data suggest that the selection of particular inputs, representing different components of an association field, enable neurons to take on different functional roles. In this review we discuss the various top-down influences exerted on the visual cortical pathways and highlight the dynamic nature of the receptive field, which allows neurons to carry information that is relevant to the current perceptual demands.
Hanique, Iris; Ernestus, Mirjam; Schuppler, Barbara
This paper investigates the nature of reduction phenomena in informal speech. It addresses the question whether reduction processes that affect many word types, but only if they occur in connected informal speech, may be categorical in nature. The focus is on reduction of schwa in the prefixes and on word-final /t/ in Dutch past participles. More than 2000 tokens of past participles from the Ernestus Corpus of Spontaneous Dutch and the Spoken Dutch Corpus (both from the interview and read speech component) were transcribed automatically. The results demonstrate that the presence and duration of /t/ are affected by approximately the same phonetic variables, indicating that the absence of /t/ is the extreme result of shortening, and thus results from a gradient reduction process. Also for schwa, the data show that mainly phonetic variables influence its reduction but its presence is affected by different and more variables than its duration, which suggests that the absence of schwa may result from gradient as well as categorical processes. These conclusions are supported by the distributions of the segments' durations. These findings provide evidence that reduction phenomena which affect many words in informal conversations may also result from categorical reduction processes.
Bacon, Alison M; Handley, Simon J
Recent research has suggested that individuals with dyslexia rely on explicit visuospatial representations for syllogistic reasoning while most non-dyslexics opt for an abstract verbal strategy. This paper investigates the role of visual processes in relational reasoning amongst dyslexic reasoners. Expt 1 presents written and verbal protocol evidence to suggest that reasoners with dyslexia generate detailed representations of relational properties and use these to make a visual comparison of objects. Non-dyslexics use a linear array of objects to make a simple transitive inference. Expt 2 examined evidence for the visual-impedance effect which suggests that visual information detracts from reasoning leading to longer latencies and reduced accuracy. While non-dyslexics showed the impedance effects predicted, dyslexics showed only reduced accuracy on problems designed specifically to elicit imagery. Expt 3 presented problems with less semantically and visually rich content. The non-dyslexic group again showed impedance effects, but dyslexics did not. Furthermore, in both studies, visual memory predicted reasoning accuracy for dyslexic participants, but not for non-dyslexics, particularly on problems with highly visual content. The findings are discussed in terms of the importance of visual and semantic processes in reasoning for individuals with dyslexia, and we argue that these processes play a compensatory role, offsetting phonological and verbal memory deficits.
Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit
Prevailing hierarchical models propose that temporal processing capacity (the amount of information that a brain region processes in a unit time) decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic
Juhasz, Barbara J; Johnson, Rebecca L; Brewer, Jennifer
New words enter the language through several word formation processes [see Simonini (Engl J 55:752-757, 1966)]. One such process, blending, occurs when two source words are combined to represent a new concept (e.g., SMOG, BRUNCH, BLOG, and INFOMERCIAL). While there have been examinations of the structure of blends [see Gries (Linguistics 42:639-667, 2004) and Lehrer (Am Speech 73:3-28, 1998)], relatively little attention has been given to how lexicalized blends are recognized and if this process differs from other types of words. In the present study, blend words were matched to non-blend control words on length, familiarity, and frequency. Two tasks were used to examine blend processing: lexical decision and sentence reading. The results demonstrated that blend words were processed differently than non-blend control words. However, the nature of the effect varied as a function of task demands. Blends were recognized slower than control words in the lexical decision task but received shorter fixation durations when embedded in sentences.
Borhani, Khatereh; Làdavas, Elisabetta; Maier, Martin E; Avenanti, Alessio; Bertini, Caterina
.... To investigate at which stage of visual processing emotional and movement-related information conveyed by bodies is discriminated, we examined event-related potentials elicited by laterally presented...
Twenty-seven male subjects were tested in a driving simulator to study the effects of alcohol on visual information processing and allocation of attention. Subjects were required to control heading angle, maintain a constant speed, search for critica...
Lamme, V.A.F.; van Dijk, B.W.; Spekreijse, H.
Investigated which cortical areas and layers are involved in global feature interactions underlying texture segregation in humans and monkeys. Visual stimulation was assessed with an electrostatic monitor, and scalp or intracortical recordings with electrodes were made. Signal processing and
Chabal, Sarah; Marian, Viorica
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed.
Højen, Anders; Nazzi, Thierry
The present study explored whether the phonological bias favoring consonants found in French-learning infants and children when learning new words (Havy & Nazzi, 2009; Nazzi, 2005) is language-general, as proposed by Nespor, Peña and Mehler (2003), or varies across languages, perhaps as a function of the phonological or lexical properties of the language in acquisition. To do so, we used the interactive word-learning task set up by Havy and Nazzi (2009), teaching Danish-learning 20-month-olds pairs of phonetically similar words that contrasted either on one of their consonants or one of their vowels, by either one or two phonological features. Danish was chosen because it has more vowels than consonants, and is characterized by extensive consonant lenition. Both phenomena could disfavor a consonant bias. Evidence of word-learning was found only for vocalic information, irrespective of whether one or two phonological features were changed. The implication of these findings is that the phonological biases found in early lexical processing are not language-general but develop during language acquisition, depending on the phonological or lexical properties of the native language.
Arcara, Giorgio; Lacaita, Graziano; Mattaloni, Elisa; Passarini, Laura; Mondini, Sara; Benincà, Paola; Semenza, Carlo
The present study is the first neuropsychological investigation into the problem of the mental representation and processing of irreversible binomials (IBs), i.e., word pairs linked by a conjunction (e.g., "hit and run," "dead or alive"). In order to test their lexical status, the phenomenon of neglect dyslexia is explored. People with left-sided neglect dyslexia show a clear lexical effect: they can read IBs better (i.e., by dropping the leftmost words less frequently) when their components are presented in their correct order. This may be taken as an indication that they treat these constructions as lexical, not decomposable, elements. This finding therefore constitutes strong evidence that IBs tend to be stored in the mental lexicon as a whole and that this whole form is preferably addressed in the retrieval process.
Jung, Hyerim; Woo, Young Jae; Kang, Je Wook; Choi, Yeon Woo; Kim, Kyeong Mi
The aim of the present study was to investigate the difference in visual perception between ADHD children with and without sensory processing disorder, and the relationship between sensory processing and visual perception in children with ADHD. Participants were 47 outpatients, aged 6-8 years, diagnosed with ADHD. After excluding those who met exclusion criteria, 38 subjects were clustered into two groups, ADHD children with and without sensory processing disorder (SPD), using the SSP reported by their parents; subjects then completed the K-DTVP-2. Spearman correlation analysis was run to determine the relationship between sensory processing and visual perception, and the Mann-Whitney U test was conducted to compare the K-DTVP-2 scores of the two groups. The ADHD children with SPD performed inferiorly to ADHD children without SPD on the 3 quotients of the K-DTVP-2. The GVP score of the K-DTVP-2 was related to the Movement Sensitivity section (r = 0.368*) and the Low Energy/Weak section of the SSP (r = 0.369*). The results of the present study suggest that, among children with ADHD, visual perception is lower in those children with co-morbid SPD. Also, visual perception may be related to sensory processing, especially in the reactions of vestibular and proprioceptive senses. Regarding academic performance, it is necessary to consider how sensory processing issues affect visual perception in children with ADHD.
Zhang, Qin; Jiao, Lihua; Cui, Lixia
The phenomenon that concrete words are easier to process than abstract words is referred to as the word concreteness effect. Previous research has investigated influences of semantic context and word emotionality on concreteness effects. It is still unclear whether word concreteness effects might be influenced by emotional context for individuals with different cognitive styles. The present study showed how affective congruency between picture context and word target impacts concreteness effects in the word processing for field-independent and field-dependent individuals using event-related potential measures. The participants evaluated pleasantness of the target word following the presentation of an affective picture. Concrete words were associated with a larger N400 and a smaller late positive component (LPC) than abstract words. Moreover, the LPC concreteness effect occurred only in the affectively incongruent context for field-dependent participants. These findings suggest that emotional context and concreteness modulate the N400 independently, but the LPC concreteness effect is influenced by emotional context and cognitive style.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
The Quinmester course on words gives the student the opportunity to increase his proficiency by investigating word origins, word histories, morphology, and phonology. The course includes the following: dictionary skills and familiarity with the "Oxford," "Webster's Third," and "American Heritage" dictionaries; word…
Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.
Context: Business process models support various stakeholders in managing business processes and designing process-aware information systems. In order to make effective use of these models, they have to be readily understandable. Objective: Prior research has emphasized the potential of visual cues to
Pamela Jayne Louise Rae
Glenberg, Schroeder and Robertson (1998) reported that episodic memory is impaired by visual distraction and argued that this effect is consistent with a trade-off between internal and external attentional focus. However, their demonstration that visual distraction impairs memory for lists used 15 consecutive word lists, with analysis only of mid-list items, and has never been replicated. Experiment 1 (N=37) replicated their study, and found no overall effect of distraction on recall for the entire lists. However, it did replicate the impairment for mid-list recall. Experiment 2 (N=64) explored whether this pattern arises because the mid-list items are poorly encoded (by manipulating presentation rate) or because of interference. Experiment 3 (N=36) also looked at the role of interference whilst controlling for potential item effects. Neither study replicated the pattern seen in Experiment 1, despite reliable effects of presentation rate (Experiment 2) and interference (Experiments 2 and 3). Experiment 2 found no effect of distraction for mid-list items, but distraction did increase both correct and incorrect recall of all items, suggestive of a shift in willingness to report. Experiment 3 found no effects of distraction whatsoever. Thus, there is no clear evidence that distraction consistently impairs retrieval of items from lists, contrary to the embodied cognition account used to explain the original finding.
McFarland, James M.; Bondy, Adrian G.; Saunders, Richard C.; Cumming, Bruce G.; Butts, Daniel A.
Saccadic eye movements play a central role in primate vision. Yet, relatively little is known about their effects on the neural processing of visual inputs. Here we examine this question in primary visual cortex (V1) using receptive-field-based models, combined with an experimental design that leaves the retinal stimulus unaffected by saccades. This approach allows us to analyse V1 stimulus processing during saccades with unprecedented detail, revealing robust perisaccadic modulation. In particular, saccades produce biphasic firing rate changes that are composed of divisive gain suppression followed by an additive rate increase. Microsaccades produce similar, though smaller, modulations. We furthermore demonstrate that this modulation is likely inherited from the LGN, and is driven largely by extra-retinal signals. These results establish a foundation for integrating saccades into existing models of visual cortical stimulus processing, and highlight the importance of studying visual neuron function in the context of eye movements.
Miguel A García-Pérez
Saccadic suppression refers to a reduction in visual sensitivity during saccadic eye movements. This reduction is conventionally regarded as mediated by either of two sources. One is a simple passive process of motion smear during saccades also accompanied by visual masking exerted by high-contrast pre- and post-saccadic images. The other is an active process exerted by a neural mechanism that turns off visual processing so that the perception of a stable visual environment is not disrupted during saccades. Some studies have actually shown that contrast sensitivity is significantly lower during saccades than under fixation, but these experiments were not designed in a way that could weigh the differential contribution of active and passive sources of saccadic suppression. We report the results of measurements of psychometric functions for contrast detection using stimuli that are only visible during saccades, thus effectively isolating any visual processing that actually takes place during the saccades and also preventing any pre- and post-saccadic visual masking. We also report measurements of psychometric functions for detection under fixation for stimuli that are comparable in duration and spatio-temporal characteristics to the intrasaccadic retinal stimulus. Whether during saccades or under fixation, the psychometric functions for detection turned out to be very similar, leaving room only for a small amount of sensitivity reduction during saccades. This suggests that contrast processing is largely unaltered during saccades and, thus, that no neural mechanism seems to be actively involved in saccadic suppression.
Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.
Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C
Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. With the use of magnetoencephalography, we evaluated in 10 healthy subjects the neural processing of pain anticipation. Anticipatory cortical activity elicited by consecutive visual cues that signified imminent painful stimulus was compared with cues signifying nonpainful and no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was mostly prominent early on when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications toward processes involved in pain anticipation and maladaptive pain conditioning.
Fang, Yu; Nakashima, Ryoichi; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.
Knab, P.; Pinzger, M.; Gall, H.C.
Software development teams gather valuable data about features and bugs in issue tracking systems. This information can be used to measure and improve the efficiency and effectiveness of the development process. In this paper we present an approach that harnesses the extraordinary capability of the
Jensen, Amy Petersen; Ashworth, Julia
Notes that media shapes the way young people contextualize their world. Suggests that process drama could be a pedagogical forum where theater practitioners and young people could use dramatic tools to explore the form and content of the omnipresent media in its historical, social, political, and personal contexts. Provides examples of what this…
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
Oganian, Yulia; Conrad, Markus; Aryani, Arash; Spalek, Katharina; Heekeren, Hauke R
A crucial aspect of bilingual communication is the ability to identify the language of an input. Yet, the neural and cognitive basis of this ability is largely unknown. Moreover, it cannot be easily incorporated into neuronal models of bilingualism, which posit that bilinguals rely on the same neural substrates for both languages and concurrently activate them even in monolingual settings. Here we hypothesized that bilinguals can employ language-specific sublexical (bigram frequency) and lexical (orthographic neighborhood size) statistics for language recognition. Moreover, we investigated the neural networks representing language-specific statistics and hypothesized that language identity is encoded in distributed activation patterns within these networks. To this end, German-English bilinguals made speeded language decisions on visually presented pseudowords during fMRI. Language attribution followed lexical neighborhood sizes both in first (L1) and second (L2) language. RTs revealed an overall tuning to L1 bigram statistics. Neuroimaging results demonstrated tuning to L1 statistics at sublexical (occipital lobe) and phonological (temporoparietal lobe) levels, whereas neural activation in the angular gyri reflected sensitivity to lexical similarity to both languages. Analysis of distributed activation patterns reflected language attribution as early as in the ventral stream of visual processing. We conclude that in language-ambiguous contexts visual word processing is dominated by L1 statistical structure at sublexical orthographic and phonological levels, whereas lexical search is determined by the structure of both languages. Moreover, our results demonstrate that language identity modulates distributed activation patterns throughout the reading network, providing a key to language identity representations within this shared network.
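The abstract above does not specify how the bigram statistics were computed or which corpora were used. As an illustrative sketch only, the following Python snippet shows the general idea that language-specific sublexical bigram frequencies can separate letter strings by language: a string is scored by its mean log bigram frequency under each language's word list, and attributed to the higher-scoring language. All function names and the toy word lists are invented for illustration.

```python
import math
from collections import Counter

def bigram_scores(corpus_words):
    """Count letter bigrams in a word list; return counts and their total."""
    counts = Counter()
    for w in corpus_words:
        counts.update(w[i:i + 2] for i in range(len(w) - 1))
    return counts, sum(counts.values())

def language_score(word, counts, total):
    """Mean log (add-one smoothed) bigram frequency of `word` under one
    language's statistics; higher means more language-typical."""
    bigrams = [word[i:i + 2] for i in range(len(word) - 1)]
    return sum(math.log((counts[b] + 1) / (total + 1)) for b in bigrams) / len(bigrams)
```

For example, with toy English-like and German-like word lists, a string such as "thas" scores higher under the English statistics (frequent "th") than under the German ones, mimicking attribution by sublexical similarity.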
There are contradicting assumptions and findings on the direction of word stress processing in German. To resolve this question, we asked participants to read tri-syllabic nonwords and stress-ambiguous words aloud. Additionally, they also performed a working memory task (2-back task). In nonword reading, participants' individual working memory capacity was positively correlated with assignment of main stress to the antepenultimate syllable, which is most distant from the word's right edge, while a (complementary) negative correlation was observed with assignment of stress to the ultimate syllable. There was no significant correlation between working memory capacity and stress assignment to the penultimate syllable, which has been claimed to be the default stress pattern in German. In reading stress-ambiguous words, a similar but non-significant pattern was observed as in nonword reading. In sum, our results provide first psycholinguistic evidence supporting leftward stress processing in German. Our results do not lend support to the assumption of penultimate default stress in German. A specification of the lemma model is proposed which seems able to reconcile our findings and apparently contradicting assumptions and evidence.
Huang, Chen; Ding, Xiaoqing; Chen, Yan
This paper investigates the design and implementation of language models. In contrast to previous research, we emphasize the importance of word-based n-gram models. We build a word-based language model using the SRILM toolkit and apply it to contextual language processing of Chinese documents. A modified Absolute Discount smoothing algorithm is proposed to reduce the perplexity of the language model. The word-based language model improves the performance of post-processing for an online handwritten character recognition system compared with the character-based language model, but it also greatly increases computation and storage costs. Besides quantizing the model data non-uniformly, we design a new tree storage structure to compress the model size, which also increases search efficiency. We evaluate this set of approaches on a test corpus of recognition results for online handwritten Chinese characters, and propose a modified confidence measure for candidate characters that yields accurate posterior probabilities while reducing complexity. The weighted combination of linguistic knowledge and candidate confidence information proves successful and can be further developed to improve recognition accuracy.
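The paper's modified Absolute Discount variant is not detailed in the abstract. For orientation, here is a minimal Python sketch of generic (textbook) absolute discounting for a bigram model: a fixed discount d is subtracted from every observed bigram count, and the reserved probability mass is redistributed via the unigram distribution. All names are illustrative and this is not the authors' implementation.

```python
from collections import Counter

def bigram_absolute_discount(tokens, d=0.75):
    """Generic absolute-discounting bigram model (a sketch, not the
    paper's modified variant). Returns a conditional probability function."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)

    def prob(prev, word):
        c_prev = unigrams[prev]
        if c_prev == 0:
            return unigrams[word] / total  # unseen history: back off fully
        # Subtract a fixed discount d from the observed bigram count...
        discounted = max(bigrams[(prev, word)] - d, 0.0) / c_prev
        # ...and give the reserved mass to the unigram distribution,
        # weighted by how many distinct words followed `prev`.
        distinct_followers = len({w for (p, w) in bigrams if p == prev})
        backoff_weight = d * distinct_followers / c_prev
        return discounted + backoff_weight * unigrams[word] / total

    return prob
```

A useful sanity check on any such scheme is that the conditional probabilities over the full vocabulary still sum to one for every seen history; the discount merely moves mass from seen bigrams to unseen ones, which is what lowers perplexity on held-out text.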
Wallentin, Mikkel; Michaelsen, Jákup Ludvík Dahl; Rynne, Ian; Nielsen, Rasmus Høll
We investigated whether lateralized BOLD-fMRI activations in Broca's region, Wernicke's region and visual word form area (VWFA) reflect task shift costs and to which extent these effects are specific to language related task shifts. We employed a linguistic one-back memory paradigm where participants (n=58) on each trial responded to whether a given word was the same as the previous word. In concordance with previous findings we found that conceptual shifts (CS), i.e. new words, elicited a strongly left-lateralized response in all three regions compared to repeat words. Words were sometimes presented through the visual modality (read) and sometimes through the auditory modality (spoken). This enabled the study of perceptual modality shifts (PS) relative to trials that stayed in the same modality as the previous trials. Again, we found a strongly left-lateralized effect in all regions. This was independent of whether the word was a CS or not, suggesting that linguistic translation across modalities taxes the same system as CS. Response shifts (RS), on the other hand, when shifting from one response (e.g. reporting a repeat word) to another (e.g. reporting a new word) did not yield an observable left lateralized response in any of the regions, suggesting that the lateralized task shift cost effects in these regions are not shared by all types of task shifts. Lateralization for individual tasks was found to be correlated across brain regions, but not across tasks, suggesting that lateralization may not be a unitary phenomenon, but vary across participants according to task demands. Both response time and lateralization were found to reflect the demands not only of the current trial but also of the previous trial, illustrating the context dependency of even simple cognitive tasks.
Quam, Carolyn; Creel, Sarah C.
Previous research has mainly considered the impact of tone-language experience on ability to discriminate linguistic pitch, but proficient bilingual listening requires differential processing of sound variation in each language context. Here, we ask whether Mandarin-English bilinguals, for whom pitch indicates word distinctions in one language but not the other, can process pitch differently in a Mandarin context vs. an English context. Across three eye-tracked word-learning experiments, results indicated that tone-intonation bilinguals process tone in accordance with the language context. In Experiment 1, 51 Mandarin-English bilinguals and 26 English speakers without tone experience were taught Mandarin-compatible novel words with tones. Mandarin-English bilinguals out-performed English speakers, and, for bilinguals, overall accuracy was correlated with Mandarin dominance. Experiment 2 taught 24 Mandarin-English bilinguals and 25 English speakers novel words with Mandarin-like tones, but English-like phonemes and phonotactics. The Mandarin-dominance advantages observed in Experiment 1 disappeared when words were English-like. Experiment 3 contrasted Mandarin-like vs. English-like words in a within-subjects design, providing even stronger evidence that bilinguals can process tone language-specifically. Bilinguals (N = 58), regardless of language dominance, attended more to tone than English speakers without Mandarin experience (N = 28), but only when words were Mandarin-like, not when they were English-like. Mandarin-English bilinguals thus tailor tone processing to the within-word language context.
Written in a friendly Beginner's Guide format, showing the user how to use the digital media aspects of Matlab (image, video, sound) in a practical, tutorial-based style. This is great for novice programmers in any language who would like to use Matlab as a tool for their image and video processing needs, and it also comes in handy for photographers or video editors with even less programming experience wanting an all-in-one tool for their tasks.
Baayen, R. H.; Feldman, L. B.; Schreuder, R.
Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…
Stefan M Wierda
BACKGROUND: When a second target (T2) is presented in close succession of a first target (T1) within a stream of non-targets, people often fail to detect T2, a deficit known as the attentional blink (AB). Two types of theories can be distinguished that have tried to account for this phenomenon. Whereas attentional-control theories suggest that protection of consolidation processes induces the AB, limited-resource theories claim that the AB is caused by a lack of resources. According to the latter type of theories, increasing difficulty of one or both targets should increase the magnitude of the AB. Similarly, attentional-control theories predict that a difficult T1 increases the AB due to prolonged processing. However, the prediction for T2 is not as straightforward. Prolonged processing of T2 could cause conflicts and increase the AB. However, if consolidation of T2 is postponed without loss of identity, the AB might be attenuated. METHODOLOGY/PRINCIPAL FINDINGS: Participants performed an AB task that consisted of a stream of distractor non-words and two target words. Difficulty of T1 and T2 was manipulated by varying word frequency. Overall performance for high-frequency words was better than for low-frequency words. When T1 was highly frequent, the AB was reduced. The opposite effect was found for T2. When T2 was highly frequent, performance during the AB period was relatively worse than for a low-frequency T2. A threaded-cognition model of the AB was presented that simulated the observed pattern of behavior by taking changes in the time-course of retrieval and consolidation processes into account. Our results were replicated in a subsequent ERP study. CONCLUSIONS/SIGNIFICANCE: The finding that a difficult low-frequency T2 reduces the magnitude of the AB is at odds with limited-resource accounts of the AB. However, it was successfully accounted for by the threaded-cognition model, thus providing an explanation in terms of attentional control.
Lachmair, Martin; Ruiz Fernandez, Susana; Bury, Nils-Alexander; Gerjets, Peter; Fischer, Martin H; Bock, Otmar L
The aim of the present study was to test the functional relevance of the spatial concepts UP and DOWN for words that use these concepts either literally (space) or metaphorically (time, valence). Functional relevance would imply a symmetrical relationship between the spatial concepts and words related to them: processing a word activates the related spatial concept on the one hand, and activation of the concept eases the retrieval of a related word on the other. To test the latter, the rotation angle of participants' body position was manipulated to either an upright or a head-down tilted position to activate the related spatial concept. Afterwards, in a within-subject design, participants produced previously memorized words of the concepts space, time, and valence in time with a metronome. All words were related either to the spatial concept UP or DOWN. The results, including Bayesian analyses, show (1) a significant interaction between body position and words using the concepts UP and DOWN literally, (2) a marginally significant interaction between body position and temporal words, and (3) no effect between body position and valence words. However, post-hoc analyses suggest no difference between experiments. Thus, the authors concluded that integrating sensorimotor experiences is indeed of functional relevance for all three concepts of space, time, and valence; however, the strength of this functional relevance depends on how closely words are linked to mental concepts representing vertical space.
Scaling laws characterize diverse complex systems in a broad range of fields, including physics, biology, finance, and social science. Human language is another example of a complex system of word organization. Studies of written texts have shown that scaling laws characterize word occurrence frequency, word rank, and the growth of distinct words with increasing text length. However, these studies have mainly concentrated on Western linguistic systems, and the laws that govern the lexical organization, structure, and dynamics of the Chinese language remain poorly understood. Here we study a database of Chinese- and English-language books. We report that three distinct scaling laws characterize word organization in the Chinese language. We find that these scaling laws have different exponents and crossover behaviors compared to English texts, indicating different word organization and dynamics in the process of text growth. We propose a stochastic feedback model of word organization and text growth, which successfully accounts for the empirically observed scaling laws with their corresponding scaling exponents and characteristic crossover regimes. Further, by varying key model parameters, we reproduce differences in the organization and scaling laws of words between the Chinese and English languages. We also identify functional relationships between model parameters and the empirically observed scaling exponents, thus providing new insights into word organization and growth dynamics in the Chinese and English languages.
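The two classical scaling laws referenced in this abstract, word frequency versus rank (Zipf's law) and distinct-word growth with text length (Heaps' law), can be measured from any corpus in a few lines. The Python sketch below is illustrative only; the toy token list stands in for the Chinese and English book database used in the study, and the function names are ours, not the authors':

```python
from collections import Counter

def rank_frequency(tokens):
    """Return (rank, frequency) pairs; Zipf's law predicts freq ~ rank^(-alpha)."""
    counts = Counter(tokens)
    freqs = sorted(counts.values(), reverse=True)
    return list(enumerate(freqs, start=1))

def vocabulary_growth(tokens):
    """Distinct-word count V(n) after each token; Heaps' law predicts V ~ n^beta."""
    seen, growth = set(), []
    for tok in tokens:
        seen.add(tok)
        growth.append(len(seen))
    return growth

tokens = "the cat sat on the mat and the dog sat on the log".split()
print(rank_frequency(tokens)[:3])   # → [(1, 4), (2, 2), (3, 2)]
print(vocabulary_growth(tokens))    # → [1, 2, 3, 4, 4, 5, 6, 6, 7, 7, 7, 7, 8]
```

Fitting straight lines to these curves on log-log axes yields the scaling exponents whose cross-language differences the study reports.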
CH’s acquired dyslexia and dysgraphia left him with a profound impairment in processing abstract letter identities. This impairment affected his ability to process strings of letters in a variety of tasks, for example, nonword reading, spelling, and recognizing orally spelled words. However, while impaired, his single-word reading was surprisingly good given his single-letter impairment, suggesting an additional route to word meaning from visually presented familiar words that does not require abstract letter identities.
Tibber, Marc S; Anderson, Elaine J; Bobin, Tracy; Carlin, Patricia; Shergill, Sukhwinder S; Dakin, Steven C
Schizophrenia has been linked to impaired performance on a range of visual processing tasks (e.g. detection of coherent motion and contour detection). It has been proposed that this is due to a general inability to integrate visual information at a global level. To test this theory, we assessed the performance of people with schizophrenia on a battery of tasks designed to probe voluntary averaging in different visual domains. Twenty-three outpatients with schizophrenia (mean age: 40±8 years; 3 female) and 20 age-matched control participants (mean age 39±9 years; 3 female) performed a motion coherence task and three equivalent noise (averaging) tasks, the latter allowing independent quantification of local and global limits on visual processing of motion, orientation and size. All performance measures were indistinguishable between the two groups (ps>0.05, one-way ANCOVAs), with one exception: participants with schizophrenia pooled fewer estimates of local orientation than controls when estimating average orientation (p = 0.01, one-way ANCOVA). These data do not support the notion of a generalised visual integration deficit in schizophrenia. Instead, they suggest that distinct visual dimensions are differentially affected in schizophrenia, with a specific impairment in the integration of visual orientation information.
Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin
The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up, and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration, with applications in various manufacturing processes. The methodologies have proven useful for virtual design and virtual training, providing solutions that address issues of energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting the development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.
Schlenker, Richard M.
This guide was developed as a "how to" training device for merging database and word processing files using AppleWorks version 2.0 and the Apple IIGS computer with two disk drives. Step-by-step instructions are provided for loading database files, transferring database files to the clipboard, merging database files into word processor…
Schlenker, Richard M.; Schlenker, Deborah S.
This guide was developed as a "how to" training device for merging database and word processing files using Appleworks and the Apple IIe computer with a Duodisk or two disk drives. Step-by-step directions are provided for transferring the database file, printing the file, moving to the word processor file, and merging documents. Also…
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of…
Rellecke, Julian; Palazova, Marina; Sommer, Werner; Schacht, Annekathrin
The degree to which emotional aspects of stimuli are processed automatically is controversial. Here, we assessed the automatic elicitation of emotion-related brain potentials (ERPs) to positive, negative, and neutral words and facial expressions in an easy and superficial face-word discrimination task, for which the emotional valence was…
Faust, Miriam; Barak, Ofra; Chiarello, Christine
The present study examined left (LH) and right (RH) hemisphere involvement in discourse processing by testing the ability of each hemisphere to use world knowledge in the form of script contexts for word recognition. Participants made lexical decisions to laterally presented target words preceded by centrally presented script primes (four…
Bacon, AM; Handley, SJ
Recent research has suggested that individuals with dyslexia rely on explicit visuospatial representations for syllogistic reasoning, while most non-dyslexics opt for an abstract verbal strategy. This paper investigates the role of visual processes in relational reasoning amongst dyslexic reasoners. Experiment 1 presents written and verbal protocol evidence suggesting that reasoners with dyslexia generate detailed representations of relational properties and use these to make a visual comparison of ...
Zeguers, M.H.T.; Snellings, P.; Tijms, J.; Weeda, W.D.; Tamboer, P.; Bexkens, A.; Huizenga, H.M.
The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and
Felix R. Dreyer
Neuroimaging and neuropsychological experiments suggest that modality-preferential cortices, including motor and somatosensory areas, contribute to the semantic processing of action-related concrete words. In contrast, a possible role of modality-preferential, including sensorimotor, areas in processing abstract meaning remains under debate. However, recent fMRI studies indicate an involvement of the left sensorimotor cortex in the processing of abstract-emotional words (e.g., love). But are these areas indeed necessary for processing action-related and abstract words? The current study investigates word processing in two patients suffering from focal brain lesions in the left frontocentral motor system. A speeded lexical decision task (LDT) on meticulously matched word groups showed that the recognition of nouns from different semantic categories, related to food, animals, tools, and abstract-emotional concepts, was differentially affected. Whereas patient HS, with a lesion in dorsolateral central sensorimotor cortex next to the hand area, showed a category-specific deficit in recognizing tool words, patient CA, suffering from a lesion centered in the left SMA, was primarily impaired in abstract-emotional word processing. These results point to a causal role of the motor cortex in the semantic processing of both action-related object concepts and abstract-emotional concepts, and therefore suggest that the motor areas previously found active in action-related and abstract word processing can serve a meaning-specific, necessary role in word recognition. The category-specific nature of the observed dissociations is difficult to reconcile with the idea that sensorimotor systems are somehow peripheral or ‘epiphenomenal’ to meaning and concept processing. Rather, our results are consistent with the claim that cognition is grounded in action and perception and based on distributed action-perception circuits reaching into sensorimotor areas.
Lam, K.J.Y.; Dijkstra, A.F.J.; Rüschemeyer, S.A.
Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain
Kennedy, Tay; Thomas, David G.; Woltamo, Tesfaye; Abebe, Yewelsew; Hubbs-Tait, Laura; Sykova, Vladimira; Stoecker, Barbara J.; Hambidge, K. Michael
Speed of information processing and recognition memory can be assessed in infants using a visual information processing (VIP) paradigm. In a sample of 100 infants 6-8 months of age from Southern Ethiopia, we assessed relations between growth and VIP. The 69 infants who completed the VIP protocol had a mean weight z score of -1.12 plus or minus…
N1. "I’be devc1opment of a computer vision system," Recherche ’siclogica (1979),. Irad) .1. Mike "Special issue on computer vision (ed.)," Arificial ...development of theories of reading and theories of vision in Artificial Intelligence. We propose to exploit and extend recent results inaComputer... Vision to develop an improved model of early processing in reading. This first paper considers the problem of isolating words in text based on the
Meppelink, Anne Marthe; de Jong, Bauke M; Renken, Remco; Leenders, Klaus L; Cornelissen, Frans W; van Laar, Teus
Impaired visual processing may play a role in the pathophysiology of visual hallucinations in Parkinson's disease. In order to study the neuronal circuitry involved, we assessed cerebral activation patterns both before and during recognition of gradually revealed images in Parkinson's disease patients with visual hallucinations (PDwithVHs), Parkinson's disease patients without visual hallucinations (PDnonVHs), and healthy controls. We hypothesized that, before image recognition, PDwithVHs would show reduced bottom-up visual activation in occipital-temporal areas and increased (pre)frontal activation, reflecting increased top-down demand. Overshoot of the latter has been proposed to play a role in generating visual hallucinations. Nine non-demented PDwithVHs, 14 PDnonVHs, and 13 healthy controls were scanned on a 3 Tesla magnetic resonance imaging scanner. Static images of animals and objects gradually appearing out of random visual noise were used in an event-related design paradigm. Analyses were time-locked to the moment of image recognition, indicated by the subjects' button-press. Subjects were asked to press an additional button on a colour-changing fixation dot, to keep attention and motor action constant and to assess reaction times. Data pre-processing and statistical analysis were performed with statistical parametric mapping-5 software. Bilateral activation of the fusiform and lingual gyri was seen during image recognition in all groups, whereas the hypothesized increase in top-down frontal activation was not obtained. The finding of activation reductions in ventral/lateral visual association cortices in PDwithVHs before image recognition further helps to explain the functional mechanisms underlying visual hallucinations in Parkinson's disease.
Choi, Wonil; Gordon, Peter C
Two experiments examined how lexical status affects the targeting of saccades during reading by using the boundary technique to vary independently the content of a letter string when seen in parafoveal preview and when directly fixated. Experiment 1 measured the skipping rate for a target word embedded in a sentence under three parafoveal preview conditions: full preview (e.g., brain-brain), pseudohomophone preview (e.g., brane-brain), and orthographic nonword control preview (e.g., brant-brain); in the first condition, the preview string was always an English word, while in the second and third conditions, it was always a nonword. Experiment 2 investigated three conditions where the preview string was always a word: full preview (e.g., beach-beach), homophone preview (e.g., beech-beach), and orthographic control preview (e.g., bench-beach). None of the letter string manipulations used to create the preview conditions in the experiments disrupted sublexical orthographic or phonological patterns. In Experiment 1, higher skipping rates were observed for the full (lexical) preview condition, which consisted of a word, than for the nonword preview conditions (pseudohomophone and orthographic control). In contrast, Experiment 2 showed no difference in skipping rates across the three types of lexical preview conditions (full, homophone, and orthographic control), although preview type did influence reading times. This pattern indicates that skipping not only depends on the presence of disrupted sublexical patterns of orthography or phonology, but also is critically dependent on processes that are sensitive to the lexical status of letter strings in the parafovea.
de Jong, Maartje C; Brascamp, Jan W; Kemner, Chantal; van Ee, Raymond; Verstraten, Frans A J
The way we perceive the present visual environment is influenced by past visual experiences. Here we investigated the neural basis of such experience dependency. We repeatedly presented human observers with an ambiguous visual stimulus (structure-from-motion) that can give rise to two distinct perceptual interpretations. Past visual experience is known to influence the perception of such stimuli. We recorded fast dynamics of neural activity shortly after stimulus onset using event-related electroencephalography. The number of previous occurrences of a certain percept modulated early posterior brain activity starting as early as 50 ms after stimulus onset. This modulation developed across hundreds of percept repetitions, reflecting several minutes of accumulating perceptual experience. Importantly, there was no such modulation when the mere number of previous stimulus presentations was considered regardless of how they were perceived. This indicates that the effect depended on previous perception rather than previous visual input. The short latency and posterior scalp location of the effect suggest that perceptual history modified bottom-up stimulus processing in early visual cortex. We propose that bottom-up neural responses to a given visual presentation are shaped, in part, by feedback modulation that occurred during previous presentations, thus allowing these responses to be biased in light of previous perceptual decisions. Copyright © 2014 the authors.
Smerbeck, A M; Parrish, J; Serafin, D; Yeh, E A; Weinstock-Guttman, B; Hoogs, M; Krupp, L B; Benedict, R H B
Children with multiple sclerosis (MS) can suffer significant cognitive deficits. This study investigates the sensitivity and validity in pediatric MS of two visual processing tests borrowed from the adult literature, the Brief Visuospatial Memory Test-Revised (BVMTR) and the Symbol Digit Modalities Test (SDMT). The aim was to test the hypothesis that visual processing is disproportionately impacted in pediatric MS by comparing performance with that of healthy controls on the BVMTR and SDMT. We studied 88 participants (43 MS, 45 controls) using a neuropsychological assessment battery including measures of intelligence, language, visual memory, and processing speed. Patients and demographically matched controls were compared to determine which tests are most sensitive in pediatric MS. Statistically significant differences were found between the MS and control groups on BVMTR Total Learning (t(84) = 4.04), supporting the sensitivity of these visual processing measures in adolescents with MS.
Sebastiani, Laura; Castellani, Eleonora; Gemignani, Angelo; Artoni, Fiorenzo; Menicucci, Danilo
Priming is an implicit memory effect in which previous exposure to one stimulus influences the response to another stimulus. The main characteristic of priming is that it occurs without awareness. Priming takes place even when the physical attributes of previously studied and test stimuli do not match; in fact, it largely reflects a general stimulus representation activated at encoding, independently of the sensory modality engaged. Our aim was to evaluate whether, in a cross-modal word-stem completion task, negative priming scores could depend on inefficient word processing at study and therefore on an altered stimulus representation. Words were presented in the auditory modality, and word-stems to be completed in the visual modality. At study, we recorded auditory ERPs and compared the P300 (attention/memory) and N400 (meaning processing) of individuals with positive and negative priming. Besides classical averaging-based ERP analysis, we used an ICA-based method (ErpICASSO) to separate the potentials related to different processes contributing to ERPs. Classical analysis yielded a significant difference between the two waves across the whole scalp. ErpICASSO allowed separating the novelty-related P3a and the top-down control-related P3b sub-components of the P300. Specifically, in component C3, the positive deflection identifiable as P3b was significantly greater in the positive than in the negative priming group, while the late negative deflection corresponding to the parietal N400 was reduced in the positive priming group. In conclusion, inadequacy of specific processes at encoding, such as attention and/or meaning retrieval, could generate weak semantic representations, making words less accessible in subsequent implicit retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.
Jacobs, Christianne; de Graaf, Tom A; Sack, Alexander T
Neuroscience research has conventionally focused on how the brain processes sensory information after the information has been received. Recently, increased interest focuses on how the state of the brain upon receiving inputs determines and biases their subsequent processing and interpretation. Here, we investigated such 'pre-stimulus' brain mechanisms and their relevance for objective and subjective visual processing. Using non-invasive focal brain stimulation [transcranial magnetic stimulation (TMS)] we disrupted spontaneous brain state activity within early visual cortex (EVC) before the onset of visual stimulation, at two different pre-stimulus-onset-asynchronies (pSOAs). We found that TMS pulses applied to EVC at either 20 msec or 50 msec before onset of a simple orientation stimulus both prevented this stimulus from reaching visual awareness. Interestingly, only the TMS-induced visual suppression following TMS at a pSOA of −20 msec was retinotopically specific, while TMS at a pSOA of −50 msec was not. In a second experiment, we used more complex symbolic arrow stimuli, and found TMS-induced suppression only when disrupting EVC at a pSOA of approximately −60 msec, which, in line with Experiment 1, was not retinotopically specific. Despite the topographic unspecificity of the −50 msec effect, the additional control measurements, as well as tracking and removal of eye blinks, suggested that this effect too was not the result of an unspecific artifact, and was thus neural in origin. We therefore obtained evidence of two distinct neural mechanisms taking place in EVC, both determining whether or not subsequent visual inputs are successfully processed by the human visual system.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
Balslev, Daniela; Miall, R Chris; Cole, Jonathan
During visually guided movements both vision and proprioception inform the brain about the position of the hand, so interaction between these two modalities is presumed. Current theories suggest that this interaction occurs by sensory information from both sources being fused into a more reliable estimate. Performance was compared under conditions with normal and reduced proprioception after 1-Hz rTMS over the hand-contralateral somatosensory cortex. Proprioceptive deafferentation slowed the reaction time for initiating a motor correction in response to a visual perturbation in hand position, but not to a target jump. Correlation analyses suggested that reaction time was influenced by the size of the visual error rather than by the visuo-proprioceptive conflict or the variance in cursor position. We suggest that during movements intact proprioception is necessary for the rapid processing of visual feedback.
Calvo, Manuel G; Meseguer, Enrique
The independent and the combined influence of word length, word frequency, and contextual predictability on eye movements in reading was examined across processing stages under two priming-context conditions...
Xue, Jin; Liu, Tongtong; Marmolejo-Ramos, Fernando; Pei, Xuna
The present study aimed at distinguishing the processing of early learned L2 words from late ones for Chinese natives who learn English as a foreign language. Specifically, we examined whether the age of acquisition (AoA) effect arose during the arbitrary mapping from conceptual knowledge onto linguistic units. Behavioral and ERP data were collected when 28 Chinese-English bilinguals were asked to perform semantic relatedness judgments on word pairs, which represented three stages of word learning (i.e., primary school, junior and senior high school). A 3 (AoA: early vs. intermediate vs. late) × 2 (regularity: regular vs. irregular) × 2 (semantic relatedness: related vs. unrelated) × 2 (hemisphere: left vs. right) × 3 (brain area: anterior vs. central vs. posterior) within-subjects design was adopted. Results from the analysis of N100 and N400 amplitudes showed that early learned words had an advantage in processing accuracy and speed; there was a tendency for the AoA effect to be more pronounced for irregular word pairs and in the semantically related condition. More importantly, ERP results showed that early acquired words induced larger N100 amplitudes in the parietal area and more negative-going N400s than late acquired words in the frontal and central regions. The results indicate that the locus of the AoA effect might derive from the arbitrary mapping between word forms and semantic concepts, and that early acquired words have more semantic interconnections than late acquired words.
Panigrahi, Pradipta Kumar
Imaging Heat and Mass Transfer Processes: Visualization and Analysis applies Schlieren and shadowgraph techniques to complex heat and mass transfer processes. Several applications are considered where thermal and concentration fields play a central role. These include vortex shedding and suppression from stationary and oscillating bluff bodies such as cylinders, convection around crystals growing from solution, and buoyant jets. Many of these processes are unsteady and three dimensional. The interpretation and analysis of images recorded are discussed in the text.
There have been many studies on the effectiveness of visual aids that go beyond static illustrations. Many of these have concentrated on visual aids such as animations and models, or even non-traditional visual-aid activities like role-playing. This study focuses on the effectiveness of three different types of visual aids: models, animation, and a role-playing activity. Students used a modeling kit made of Styrofoam balls and toothpicks to construct nucleotides and then bond nucleotides together to form DNA. Next, students created their own animation to depict the processes of DNA replication, transcription, and translation. Finally, students worked in teams to build proteins while acting out the process of translation. Students were given a pre- and post-test that measured their knowledge and comprehension of the four topics mentioned above. Results show a significant gain in post-test scores compared to pre-test scores, indicating that the incorporated visual aids were effective methods for teaching DNA structure and processes.
Stefanics, Gábor; Csukly, Gábor; Komlósi, Sarolta; Czobor, Pál; Czigler, István
Facial emotions express our internal states and are fundamental in social interactions. Here we explore whether the repetition of unattended facial emotions builds up a predictive representation of frequently encountered emotions in the visual system. Participants (n=24) were presented peripherally with facial stimuli expressing emotions while they performed a visual detection task presented in the center of the visual field. Facial stimuli consisted of four faces of different identity, but expressing the same emotion (happy or fearful). Facial stimuli were presented in blocks of oddball sequence (standard emotion: p=0.9, deviant emotion: p=0.1). Event-related potentials (ERPs) to the same emotions were compared when the emotions were deviant and standard, respectively. We found visual mismatch negativity (vMMN) responses to unattended deviant emotions in the 170-360 ms post-stimulus range over bilateral occipito-temporal sites. Our results demonstrate that information about the emotional content of unattended faces presented at the periphery of the visual field is rapidly processed and stored in a predictive memory representation by the visual system. We also found evidence that differential processing of deviant fearful faces starts as early as 70-120 ms after stimulus onset. This finding shows a 'negativity bias' under unattended conditions. Differential processing of fearful deviants was more pronounced in the right hemisphere in the 195-275 ms and 360-390 ms intervals, whereas processing of happy deviants evoked a larger differential response in the left hemisphere in the 360-390 ms range, indicating differential hemispheric specialization for automatic processing of positive and negative affect. Copyright © 2011 Elsevier Inc. All rights reserved.
Revonsuo, A; Portin, R; Juottonen, K; Rinne, J O
Patients suffering from Alzheimer's disease (AD) have severe difficulties in tasks requiring the use of semantic knowledge. The semantic deficits associated with AD have been extensively studied by using behavioral methods. Many of these studies indicate that AD patients have a general deficit in voluntary access to semantic representations but that the structure of the representations themselves might be preserved. However, several studies also provide evidence that to some extent semantic representations in AD may in fact be degraded. Recently, a few studies have utilized event-related brain potentials (ERPs) that are sensitive to semantic factors in order to investigate the electrophysiological correlates of the semantic impairment in AD. Interest has focused on the N400 component, which is known to reflect the on-line semantic processing of linguistic and pictorial stimuli. The results from studies of N400 changes in AD remain somewhat controversial: Some studies report normal or enlarged N400 components in AD, whereas others report diminished ones. One issue not reported in previous studies is whether word-elicited ERPs other than N400 remain normal in AD. In the present study our aim was to find out whether the ERP waveforms N1, P2, N400, and Late Positive Component (LPC) to semantically congruous and incongruous spoken words are abnormal in AD and whether such abnormalities specifically reflect deficiencies in semantic activation in AD. Auditory ERPs from 20 scalp sites to semantically congruous and incongruous final words in spoken sentences were recorded from 17 healthy elderly adults and 9 AD patients. The early ERP waveforms N1 and P2 were relatively normal for the AD patients, but the N400 and LPC effects (amplitude difference between congruous and incongruous conditions) were significantly reduced. We interpret the present results as showing that semantic-conceptual activation and other high-level integration processes are defective in AD. However, a
Rozanova, Olga I; Shchuko, Andrey G; Mischenko, Tatyana S
Accommodation interacts closely with the pupil response, vergence response, and binocularity. The transformation of visual reception processing and the changes in binocular cooperation during the development of presbyopia are still poorly studied, so the regularities of visual system disturbance during presbyopia formation need to be characterized. This study aims to reveal the transformation of visual reception processing and to determine the role of disturbances in binocular interaction in presbyopia formation. The study included 60 people with emmetropic refraction, uncorrected distance visual acuity of 1.0 or higher (decimal scale), normal color perception, and no concomitant ophthalmopathology. The first group consisted of 30 people (18 to 27 years old) without presbyopia; the second comprised 30 patients (45 to 55 years old) with presbyopia. Eyeball anatomy and optics were evaluated using ultrasound biomicroscopy, aberrometry, and pupillometry. The functional state of the visual system was investigated under monocular and binocular conditions. The limits of the disparate fusional reflex were registered with an original technique using a diploptic device, which made it possible to investigate binocular interaction under natural conditions without an accommodation response but with different vergence loads. The disparate fusional reflex was analyzed using the proximal and distal fusion borders and the convergence and divergence fusion borders. The area of the binocularity field was calculated in cm². Presbyopia formation is characterized by changes in intraocular anatomy, optics, visual processing, and binocularity. The inhibition of binocular interaction makes a significant contribution to the misalignment of visual perception. Modification of the proximal, distal, and convergence fusion borders was determined. It was revealed that 87% of the presbyopic patients had
The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional Magnetic Resonance Imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity involved in processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad-hoc spelling-related out-of-scanner tests: a High Spelling Skills group (HSS) and a Low Spelling Skills group (LSS). During the fMRI session, two experimental tasks were applied (a spelling recognition task and a visuoperceptual recognition task). Regions of Interest (ROIs) and their signal values were obtained for both tasks. Based on these values, Structural Equation Models (SEMs) were obtained for each spelling-competence group (HSS and LSS) and task through Maximum Likelihood (ML) estimation, and the model with the best fit was chosen in each case. Likewise, Dynamic Causal Models (DCMs) were estimated for all conditions across tasks and groups. The HSS group's SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, still congruent with the previous results, with an important role for several areas. In general, these results are consistent with the major findings of partial studies on linguistic activities, but they are the first analyses of statistical effective brain connectivity in transparent languages.
Kandel, Sonia; Peereman, Ronald; Ghimenton, Anna
Most studies on spelling processes suppose that the activation of orthographic representations is over before we start to write. The goal of the present study was to provide evidence indicating that the orthographic representations activated during spelling production interact continuously with the motor processes during movement production. We manipulated gemination to assess the influence of the orthographic properties of words on the kinematic parameters of production. Native English-speaking participants wrote words containing double letters and control words on a digitizer [e.g., DISSIPATE (Geminate) and DISGRACE (Control)]. The word pairs shared the initial letters and differed on the presence of a doublet at the same position. The results revealed that latencies were shorter for Geminates than Controls, indicating that spelling processes were facilitated by the presence of a doublet in the word. Critically, the impact of letter doubling was also observed during production, with shorter letter durations (e.g., D, I, S) and intervals (DI, IS) for Geminates than Controls. Letter doubling therefore affected the whole process of word writing: from spelling recall to movement preparation and production. The spelling processes that were involved before movement initiation cascaded into processes that regulate movement execution. The activation spread onto peripheral processing until the production of the doublet was completely programmed (e.g., letter S).
Ferrand, Ludovic; Brysbaert, Marc; Keuleers, Emmanuel; New, Boris; Bonin, Patrick; Méot, Alain; Augustinova, Maria; Pallier, Christophe
We report performance measures for lexical decision (LD), word naming (NMG), and progressive demasking (PDM) for a large sample of monosyllabic monomorphemic French words (N = 1,482). We compare the tasks and also examine the impact of word length, word frequency, initial phoneme, orthographic and phonological distance to neighbors, age-of-acquisition, and subjective frequency. Our results show that objective word frequency is by far the most important variable to predict reaction times in LD. For word naming, it is the first phoneme. PDM was more influenced by a semantic variable (word imageability) than LD, but was also affected to a much greater extent by perceptual variables (word length, first phoneme/letters). This may reduce its usefulness as a psycholinguistic word recognition task.
Reading is one of the most popular leisure activities and it is routinely performed by most individuals even in old age. Successful reading enables older people to master and actively participate in everyday life and maintain functional independence. Yet, reading comprises a multitude of subprocesses and is undoubtedly one of the most complex accomplishments of the human brain. Not surprisingly, findings of age-related effects on word recognition and reading have been partly contradictory and are often confined to only one of four central reading subprocesses, i.e., sublexical, orthographic, phonological and lexico-semantic processing. The aim of the present study was therefore to systematically investigate the impact of age on each of these subprocesses. A total of 1,807 participants (young, N = 384; old, N = 1,423) performed four decision tasks specifically designed to tap one of the subprocesses. To account for the behavioral heterogeneity in older adults, this subsample was split into high- and low-performing readers. Data were analyzed using a hierarchical diffusion modelling approach which provides more information than standard response time/accuracy analyses. Taking into account incorrect and correct response times, their distributions, and accuracy data, hierarchical diffusion modelling allowed us to differentiate between age-related changes in decision threshold, non-decision time and the speed of information uptake. We observed longer non-decision times for older adults and a more conservative decision threshold. More importantly, high-performing older readers outperformed younger adults in the speed of information uptake in orthographic and lexico-semantic processing, whereas a general age disadvantage was observed at the sublexical and phonological levels. Low-performing older readers were slowest in information uptake in all four subprocesses. Discussing these results in terms of computational models of word recognition, we propose
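The decomposition that diffusion modelling performs rests on a simple generative model: evidence accumulates at a drift rate (the speed of information uptake) toward a decision threshold, and a fixed non-decision time is added to every trial. A minimal Python simulation of that generative process, a sketch of the basic two-boundary model rather than the hierarchical Bayesian estimation used in the study, with all names ours, might look like:

```python
import numpy as np

def simulate_ddm(drift, threshold, ndt, n_trials=1000, dt=0.001, noise=1.0, rng=None):
    """Simulate a two-boundary drift-diffusion process.

    drift     -- speed of information uptake
    threshold -- decision boundary at +/- threshold
    ndt       -- non-decision time added to every trial
    Returns response times and correctness (upper boundary = correct).
    """
    rng = np.random.default_rng(rng)
    rts = np.empty(n_trials)
    correct = np.empty(n_trials, dtype=bool)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        # accumulate noisy evidence until a boundary is crossed
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + ndt
        correct[i] = x > 0
    return rts, correct
```

Lowering the drift rate lengthens decision times and hurts accuracy, while increasing the non-decision time shifts the entire RT distribution without affecting accuracy; this is exactly the dissociation the study exploits to separate slower uptake from slower peripheral processes in older readers.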
Gonçalves, Oscar F; Marques, Tiago Reis; Lori, Nicolás F; Sampaio, Adriana; Branco, Miguel Castelo
OCD has been hypothesized to involve failures in both cognitive and behavioral inhibitory processes. There is evidence that hyperactivation of cortical-subcortical pathways may be involved in the failure of these inhibitory systems in OCD. Despite this consensus on the role of frontal-subcortical pathways in OCD, recent studies have shown that brain regions other than the frontal-subcortical loops may be needed to understand the different cognitive and emotional deficits in OCD. Some studies have found evidence of decreased metabolic activity in areas such as the left inferior parietal cortex and the parieto-occipital junction, suggesting the possible existence of visual processing deficits. While data regarding visual processing in OCD have been inconsistent, recent studies claim that these patients show abnormal patterns of visual processing for socially rich stimuli, particularly emotionally arousing stimuli. Thus, in this article, we hypothesize that the fronto-subcortical activation consistently found in OCD may be due to a deactivation of occipital/parietal regions associated with the visual-perceptual processing of incoming socially rich stimuli. Additionally, this dissociation may become more evident as the emotional intensity of the social stimulus increases.
Most raw materials in small-hardware manufacturing are plate scraps, processed through the manual operation of ordinary punch presses, a method with low production efficiency and high labor intensity. To improve the automation level of production, a visual processing system for a punch-press manipulator was developed and designed based on the MFC tools of the Visual Studio software platform. Through image acquisition and image processing, the system obtains information about the plate to be processed, such as its shape, length, center of gravity, and pose, and provides the relevant parameters for the feeding manipulator to grip the plate and place it into position on the punch table, as well as for automatic programming of the punching machine, thereby realizing automatic press feeding and processing.
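The geometric quantities such a system extracts (shape, center of gravity, pose) can be computed from the image moments of a segmented binary blob. The following sketch uses NumPy rather than the MFC toolchain the abstract describes, and the function name and interface are ours, purely for illustration:

```python
import numpy as np

def blob_geometry(mask):
    """Compute area, centre of gravity, and orientation (pose angle, radians)
    of a binary blob from its first- and second-order image moments."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()  # centre of gravity
    # central second-order moments
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # principal-axis orientation of the blob
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return area, (cx, cy), theta

# a small horizontal rectangular 'plate' in a 10x10 binary image
mask = np.zeros((10, 10), dtype=bool)
mask[4:7, 2:7] = True
area, (cx, cy), theta = blob_geometry(mask)
# area == 15, centre == (4.0, 5.0), theta == 0.0 (major axis horizontal)
```

The centroid and orientation are exactly the parameters a feeding manipulator needs to position its gripper over the plate at the correct angle.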
Rossi, Eleonora; Diaz, Michele; Kroll, Judith F.; Dussias, Paola E.
In two self-paced reading experiments we asked whether late, highly proficient, English–Spanish bilinguals are able to process language-specific morpho-syntactic information in their second language (L2). The processing of Spanish clitic pronouns’ word order was tested in two sentential constructions. Experiment 1 showed that English–Spanish bilinguals performed similarly to Spanish–English bilinguals and revealed sensitivity to word order violations for a grammatical structure unique to the ...
Zahn, Roland; Huber, Walter; Drews, Eva; Specht, Karsten; Kemeny, Stefan; Reith, Wolfgang; Willmes, Klaus; Schwarz, Michael
In a previous functional magnetic resonance imaging (fMRI) study with normal subjects, we demonstrated regions related to conceptual-semantic word processing around the first frontal sulcus (BA 9) and the posterior parietal lobe (BA 7/40), in agreement with several previous reports. Using the same fMRI paradigm, we were able to study two consecutive cases of left middle cerebral artery (MCA) infarction (RC and HP), with lesions affecting either solely the pre-frontal part (HP) or both the pre-frontal and posterior parietal parts of the network activated in normal subjects (RC). Both patients showed transcortical sensory aphasia (TSA) on acute assessment. This contradicts classical disconnection accounts of the syndrome, which posit intact conceptual representations in TSA. The patients' recovery of language comprehension was associated with activation of a left hemispheric network: mainly activations of left perilesional pre-frontal regions (RC), left Wernicke's area (RC and HP), or the left posterior middle and inferior temporal cortex (HP) were demonstrated in the TSA patients. The latter findings suggest that in our cases of TSA functional take-over occurred in regions with related functions ('redundancy recovery') rather than in previously unrelated areas ('vicarious functioning'). Our data support distributed models of conceptual-semantic word processing and multiple left hemispheric representations of closely related functions.
Background: Amusia is a disorder that is known to affect the processing of musical pitch. Although individuals with amusia rarely show language deficits in daily life, a number of findings point to possible impairments in speech prosody that amusic perceivers may compensate for by drawing on linguistic information. Using EEG, we investigated (1) whether the processing of speech prosody is impaired in amusia and (2) whether emotional linguistic information can ease this process. Method: Twenty Chinese amusics and 22 matched controls were presented pairs of emotional words spoken with either statement or question intonation while their EEG was recorded. Their task was to judge whether the intonations were the same. Results: Emotional linguistic information did not facilitate amusics' performance on the intonation-matching task, as their performance was significantly worse than that of controls. EEG results showed a reduced N2 response to incongruent intonation pairs in amusics compared with controls, which likely reflects impaired conflict processing in amusia. However, at an earlier processing stage, our EEG results indicate that amusics were intact in early sensory auditory processing, as revealed by a comparable N1 modulation in both groups. Conclusion: We propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This, in turn, could reflect a disconnection between low-level and high-level processing.
Perez, Dorine Vergilino; Lemoine, Christelle; Sieroff, Eric; Ergis, Anne-Marie; Bouhired, Redha; Rigault, Emilie; Dore-Mazars, Karine
Words presented to the right visual field (RVF) are recognized more readily than those presented to the left visual field (LVF). Whereas the attentional bias theory proposes an explanation in terms of attentional imbalance between visual fields, the attentional advantage theory assumes that words presented to the RVF are processed automatically…
Sand, Katrine; Habekost, Thomas; Petersen, Anders
Pure alexia is a selective deficit in reading, which arises following damage to the left ventral occipito-temporal cortex. Crowding, the inability to recognise objects in clutter, has recently been hypothesised to be the underlying deficit of apperceptive visual agnosia. Crowding normally occurs in peripheral vision, and we therefore tested whether performance with words at the centre of fixation in a pure alexic patient (LK) is indeed similar to the performance of matched controls in the peripheral visual field. Using an accuracy-based word recognition task with brief, masked exposures, we tested word processing in LK and 24 matched controls. LK was tested in central vision, while the controls were tested in central vision and 4.6 degrees to the right of fixation. LK was significantly impaired on visual word recognition in the central visual field but there was no significant
Müller, Thomas; Knoll, Alois
Early visual processing as a method to speed up computations on visual input data has long been discussed in the computer vision community. The general target of such approaches is to filter non-relevant information out before it reaches the costly higher-level visual processing algorithms. By inserting this additional filter layer, the overall approach can be sped up without actually changing the visual processing methodology. Inspired by the layered architecture of the human visual processing apparatus, several approaches for early visual processing have recently been proposed. Most promising in this field is the extraction of a saliency map to determine regions of current attention in the visual field. Such saliency can be computed in a bottom-up manner, i.e. the theory claims that static regions of attention emerge from a certain color footprint, and dynamic regions of attention emerge from connected blobs of texture moving in a uniform way across the visual field. Top-down saliency effects are either unconscious, through inherent mechanisms like inhibition-of-return, i.e. within a period of time the attention level paid to a certain region automatically decreases if the properties of that region do not change, or volitional, through cognitive feedback, e.g. if an object moves consistently in the visual field. These bottom-up and top-down saliency effects were implemented and evaluated in a previous computer vision system for the project JAST. In this paper an extension applying evolutionary processes is proposed. The prior vision system utilized multiple threads to analyze the regions of attention delivered by the early processing mechanism. Here, in addition, multiple saliency units, each with a different parameter set, are used to produce these regions of attention. The idea is to let the population of saliency units create regions of attention, then evaluate the results with cognitive feedback and finally apply the genetic mechanism
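The inhibition-of-return mechanism described above can be sketched compactly: attend to the most salient location, suppress a neighbourhood around it, and repeat, so attention shifts to the next most salient region. A minimal Python illustration follows; the function name, parameters, and hard-suppression scheme are ours, not taken from the paper:

```python
import numpy as np

def attend(saliency, n_fixations, ior_radius=1, decay=0.0):
    """Sequentially pick regions of attention from a saliency map.

    After each fixation, a square neighbourhood of the attended location
    is suppressed to `decay` (inhibition-of-return), so the next fixation
    lands on the next most salient region."""
    s = saliency.astype(float).copy()
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((int(y), int(x)))
        y0, y1 = max(0, y - ior_radius), y + ior_radius + 1
        x0, x1 = max(0, x - ior_radius), x + ior_radius + 1
        s[y0:y1, x0:x1] = decay  # inhibition-of-return
    return fixations

s = np.zeros((5, 5))
s[1, 1] = 3.0  # most salient location
s[3, 4] = 2.0  # next most salient
fix = attend(s, 2)
# fix == [(1, 1), (3, 4)]
```

In the evolutionary extension the paper proposes, each saliency unit would carry its own parameter set (here, e.g., `ior_radius` and `decay`), with cognitive feedback scoring the regions each unit produces before the genetic mechanism recombines the population.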
Paz-Baruch, Nurit; Leikin, Roza; Leikin, Mark
Little empirical data are available concerning the cognitive abilities of gifted individuals in general and especially those who excel in mathematics. We examined visual processing abilities distinguishing between general giftedness (G) and excellence in mathematics (EM). The research population consisted of 190 students from four groups of 10th-…
Robotham, Julia Emma; Lindegaard, Martin Weis; Delfi, Tzvetelina Shentova
It has long been argued that perceptual processing of faces and words is largely independent, highly specialised and strongly lateralised. Studies of patients with either pure alexia or prosopagnosia have strongly contributed to this view. The aim of our study was to investigate how visual perception of faces and words is affected by unilateral posterior stroke. Two patients with lesions in their dominant hemisphere and two with lesions in their non-dominant hemisphere were tested on sensitive tests of face and word perception during the stable phase of recovery. Despite all patients having unilateral lesions, we found no patient with a selective deficit in either reading or face processing. Rather, the patients showing a deficit in processing either words or faces were also impaired with the other category. One patient performed within the normal range on all tasks. In addition, all patients
Fields, Eric C; Kuperberg, Gina R
We used event-related potentials (ERPs) to examine the interactions between task, emotion, and contextual self-relevance on processing words in social vignettes. Participants read scenarios that were in either third person (other-relevant) or second person (self-relevant) and we recorded ERPs to a neutral, pleasant, or unpleasant critical word. In a previously reported study (Fields and Kuperberg, 2012) with these stimuli, participants were tasked with producing a third sentence continuing the scenario. We observed a larger LPC to emotional words than neutral words in both the self-relevant and other-relevant scenarios, but this effect was smaller in the self-relevant scenarios because the LPC was larger on the neutral words (i.e., a larger LPC to self-relevant than other-relevant neutral words). In the present work, participants simply answered comprehension questions that did not refer to the emotional aspects of the scenario. Here we observed quite a different pattern of interaction between self-relevance and emotion: the LPC was larger to emotional vs. neutral words in the self-relevant scenarios only, and there was no effect of self-relevance on neutral words. Taken together, these findings suggest that the LPC reflects a dynamic interaction between specific task demands, the emotional properties of a stimulus, and contextual self-relevance. We conclude by discussing implications and future directions for a functional theory of the emotional LPC.