WorldWideScience

Sample records for sight word recognition

  1. Sight Word Recognition among Young Children At-Risk: Picture-Supported vs. Word-Only

    Science.gov (United States)

    Meadan, Hedda; Stoner, Julia B.; Parette, Howard P.

    2008-01-01

    A quasi-experimental design was used to investigate the impact of Picture Communication Symbols (PCS) on sight word recognition by young children identified as "at risk" for academic and social-behavior difficulties. Ten pre-primer and 10 primer Dolch words were presented to 23 students in the intervention group and 8 students in the…

  2. An analysis of initial acquisition and maintenance of sight words following picture matching and copy cover, and compare teaching methods.

    OpenAIRE

    Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E

    2004-01-01

    This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition...

  3. Evaluating a Computer Flash-Card Sight-Word Recognition Intervention with Self-Determined Response Intervals in Elementary Students with Intellectual Disability

    Science.gov (United States)

    Cazzell, Samantha; Skinner, Christopher H.; Ciancio, Dennis; Aspiranti, Kathleen; Watson, Tiffany; Taylor, Kala; McCurdy, Merilee; Skinner, Amy

    2017-01-01

    A concurrent multiple-baseline across-tasks design was used to evaluate the effectiveness of a computer flash-card sight-word recognition intervention with elementary-school students with intellectual disability. This intervention allowed the participants to self-determine each response interval and resulted in both participants acquiring…

  4. An analysis of initial acquisition and maintenance of sight words following picture matching and copy cover, and compare teaching methods.

    Science.gov (United States)

    Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E

    2004-01-01

    This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition when compared to the picture-matching intervention. Benefits to students and the practicality of employing the word-level teaching methods are discussed.

  5. The Use of an Autonomous Pedagogical Agent and Automatic Speech Recognition for Teaching Sight Words to Students with Autism Spectrum Disorder

    Science.gov (United States)

    Saadatzi, Mohammad Nasser; Pennington, Robert C.; Welch, Karla C.; Graham, James H.; Scott, Renee E.

    2017-01-01

    In the current study, we examined the effects of an instructional package comprised of an autonomous pedagogical agent, automatic speech recognition, and constant time delay during the instruction of reading sight words aloud to young adults with autism spectrum disorder. We used a concurrent multiple baseline across participants design to…

  6. Using Constant Time Delay to Teach Braille Word Recognition

    Science.gov (United States)

    Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah

    2014-01-01

    Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlgrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…

  7. Sight Word and Phonics Training in Children with Dyslexia

    Science.gov (United States)

    McArthur, Genevieve; Castles, Anne; Kohnen, Saskia; Larsen, Linda; Jones, Kristy; Anandakumar, Thushara; Banales, Erin

    2015-01-01

    The aims of this study were to (a) compare sight word training and phonics training in children with dyslexia, and (b) determine if different orders of sight word and phonics training have different effects on the reading skills of children with dyslexia. One group of children (n = 36) did 8 weeks of phonics training (reading via grapheme-phoneme…

  8. Errorless discrimination and picture fading as techniques for teaching sight words to TMR students.

    Science.gov (United States)

    Walsh, B F; Lamberts, F

    1979-03-01

    The effectiveness of two approaches for teaching beginning sight words to 30 TMR students was compared. In Dorry and Zeaman's picture-fading technique, words are taught through association with pictures that are faded out over a series of trials, while in the Edmark program errorless-discrimination technique, words are taught through shaped sequences of visual and auditory--visual matching-to-sample, with the target word first appearing alone and eventually appearing with orthographically similar words. Students were instructed on two lists of 10 words each, one list in the picture-fading and one in the discrimination method, in a double counter-balanced, repeated-measures design. Covariance analysis on three measures (word identification, word recognition, and picture--word matching) showed highly significant differences between the two methods. Students' performance was better after instruction with the errorless-discrimination method than after instruction with the picture-fading method. The findings on picture fading were interpreted as indicating a possible failure of the shifting of control from picture to printed word that earlier researchers have hypothesized as occurring.

  9. The attentional blink is related to phonemic decoding, but not sight-word recognition, in typically reading adults.

    Science.gov (United States)

    Tyson-Parry, Maree M; Sailah, Jessica; Boyes, Mark E; Badcock, Nicholas A

    2015-10-01

    This research investigated the relationship between the attentional blink (AB) and reading in typical adults. The AB is a deficit in the processing of the second of two rapidly presented targets when it occurs in close temporal proximity to the first target. Specifically, this experiment examined whether the AB was related to both phonological and sight-word reading abilities, and whether the relationship was mediated by accuracy on a single-target rapid serial visual processing task (single-target accuracy). Undergraduate university students completed a battery of tests measuring reading ability, non-verbal intelligence, and rapid automatised naming, in addition to rapid serial visual presentation tasks in which they were required to identify either two (AB task) or one (single target task) target/s (outlined shapes: circle, square, diamond, cross, and triangle) in a stream of random-dot distractors. The duration of the AB was related to phonological reading (n=41, β=-0.43): participants who exhibited longer ABs had poorer phonemic decoding skills. The AB was not related to sight-word reading. Single-target accuracy did not mediate the relationship between the AB and reading, but was significantly related to AB depth (non-linear fit, R(2)=.50): depth reflects the maximal cost in T2 reporting accuracy in the AB. The differential relationship between the AB and phonological versus sight-word reading implicates common resources used for phonemic decoding and target consolidation, which may be involved in cognitive control. The relationship between single-target accuracy and the AB is discussed in terms of cognitive preparation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. WORD LEVEL DISCRIMINATIVE TRAINING FOR HANDWRITTEN WORD RECOGNITION

    NARCIS (Netherlands)

    Chen, W.; Gader, P.

    2004-01-01

    Word level training refers to the process of learning the parameters of a word recognition system based on word level criteria functions. Previously, researchers trained lexicon-driven handwritten word recognition systems at the character level individually. These systems generally use statistical

  11. The Relation of Visual and Auditory Aptitudes to First Grade Low Readers' Achievement under Sight-Word and Systematic Phonic Instructions. Research Report #36.

    Science.gov (United States)

    Gallistel, Elizabeth; And Others

    Ten auditory and ten visual aptitude measures were administered in the middle of first grade to a sample of 58 low readers. More than half of this low reader sample had scored more than a year below expected grade level on two or more aptitudes. Word recognition measures were administered after four months of sight word instruction and again after…

  12. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  13. An Evaluation of Project iRead: A Program Created to Improve Sight Word Recognition

    Science.gov (United States)

    Marshall, Theresa Meade

    2014-01-01

    This program evaluation was undertaken to examine the relationship between participation in Project iRead and student gains in word recognition, fluency, and comprehension as measured by the Phonological Awareness Literacy Screening (PALS) Test. Linear regressions compared the 2012-13 PALS results from 5,140 first and second grade students at…

  14. Letter position coding across modalities: braille and sighted reading of sentences with jumbled words.

    Science.gov (United States)

    Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo

    2015-04-01

    This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.

  15. L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition

    Science.gov (United States)

    Hamada, Megumi

    2017-01-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…

  16. Can pictures promote the acquisition of sight-word reading? An evaluation of two potential instructional strategies.

    Science.gov (United States)

    Richardson, Amy R; Lerman, Dorothea C; Nissen, Melissa A; Luck, Kally M; Neal, Ashley E; Bao, Shimin; Tsami, Loukia

    2017-01-01

    Sight-word instruction can be a useful supplement to phonics-based methods under some circumstances. Nonetheless, few studies have evaluated the conditions under which pictures may be used successfully to teach sight-word reading. In this study, we extended prior research by examining two potential strategies for reducing the effects of overshadowing when using picture prompts. Five children with developmental disabilities and two typically developing children participated. In the first experiment, the therapist embedded sight words within pictures but gradually faded in the pictures as needed using a least-to-most prompting hierarchy. In the second experiment, the therapist embedded text-to-picture matching within the sight-word reading sessions. Results suggested that these strategies reduced the interference typically observed with picture prompts and enhanced performance during teaching sessions for the majority of participants. Text-to-picture matching also accelerated mastery of the sight words relative to a condition under which the therapist presented text without pictures. © 2016 Society for the Experimental Analysis of Behavior.

  17. Voice congruency facilitates word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  18. Voice congruency facilitates word recognition.

    Directory of Open Access Journals (Sweden)

    Sandra Campeanu

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  19. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    Science.gov (United States)

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences. The Arabic group showed higher accuracy in the final position than in the middle position, while the Chinese group showed the opposite pattern and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  20. Do handwritten words magnify lexical effects in visual word recognition?

    Science.gov (United States)

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  1. Syllabic Length Effect in Visual Word Recognition

    Directory of Open Access Journals (Sweden)

    Roya Ranjbar Mohammadi

    2014-07-01

    Studies on visual word recognition have resulted in different and sometimes contradictory proposals, such as the Multi-Trace Memory Model (MTM), the Dual-Route Cascaded Model (DRC), and the Parallel Distributed Processing Model (PDP). The role of the number of syllables in word recognition was examined by the use of five groups of English words and non-words. The reaction time of the participants to these words was measured using reaction-time measuring software. The results indicated that there was a syllabic effect on recognition of both high- and low-frequency words. The pattern was incremental in terms of syllable number. This pattern prevailed in high- and low-frequency words and non-words except in one-syllable words. In general, the results are in line with the PDP model, which claims that a single processing mechanism is used in both word and non-word recognition. In other words, the findings suggest that lexical items are mainly processed via a lexical route. A pedagogical implication of the findings would be that reading in English as a foreign language involves analytical processing of the syllables of words.

  2. Cognitive aspects of haptic form recognition by blind and sighted subjects.

    Science.gov (United States)

    Bailes, S M; Lambert, R M

    1986-11-01

    Studies using haptic form recognition tasks have generally concluded that the adventitiously blind perform better than the congenitally blind, implicating the importance of early visual experience in improved spatial functioning. The hypothesis was tested that the adventitiously blind have retained some ability to encode successive information obtained haptically in terms of a global visual representation, while the congenitally blind use a coding system based on successive inputs. Eighteen blind (adventitiously and congenitally) and 18 sighted (blindfolded and performing with vision) subjects were tested on their recognition of raised line patterns when the standard was presented in segments: in immediate succession, or with unfilled intersegmental delays of 5, 10, or 15 seconds. The results did not support the above hypothesis. Three main findings were obtained: normally sighted subjects were both faster and more accurate than the other groups; all groups improved in accuracy of recognition as a function of length of interstimulus interval; sighted subjects tended to report using strategies with a strong verbal component while the blind tended to rely on imagery coding. These results are explained in terms of information-processing theory consistent with dual encoding systems in working memory.

  3. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustive recollecting the sensory properties of "old" words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Orthographic Mapping in the Acquisition of Sight Word Reading, Spelling Memory, and Vocabulary Learning

    Science.gov (United States)

    Ehri, Linnea C.

    2014-01-01

    Orthographic mapping (OM) involves the formation of letter-sound connections to bond the spellings, pronunciations, and meanings of specific words in memory. It explains how children learn to read words by sight, to spell words from memory, and to acquire vocabulary words from print. This development is portrayed by Ehri (2005a) as a sequence of…

  5. The effect of word concreteness on recognition memory.

    Science.gov (United States)

    Fliessbach, K; Weis, S; Klaver, P; Elger, C E; Weber, B

    2006-09-01

    Concrete words that are readily imagined are better remembered than abstract words. Theoretical explanations for this effect either claim a dual coding of concrete words in the form of both a verbal and a sensory code (dual-coding theory), or a more accessible semantic network for concrete words than for abstract words (context-availability theory). However, the neural mechanisms of improved memory for concrete versus abstract words are poorly understood. Here, we investigated the processing of concrete and abstract words during encoding and retrieval in a recognition memory task using event-related functional magnetic resonance imaging (fMRI). As predicted, memory performance was significantly better for concrete words than for abstract words. Abstract words elicited stronger activations of the left inferior frontal cortex both during encoding and recognition than did concrete words. Stronger activation of this area was also associated with successful encoding for both abstract and concrete words. Concrete words elicited stronger activations bilaterally in the posterior inferior parietal lobe during recognition. The left parietal activation was associated with correct identification of old stimuli. The anterior precuneus, left cerebellar hemisphere and the posterior and anterior cingulate cortex showed activations both for successful recognition of concrete words and for online processing of concrete words during encoding. Additionally, we observed a correlation across subjects between brain activity in the left anterior fusiform gyrus and hippocampus during recognition of learned words and the strength of the concreteness effect. These findings support the idea of specific brain processes for concrete words, which are reactivated during successful recognition.

  6. Foreign language learning, hyperlexia, and early word recognition.

    Science.gov (United States)

    Sparks, R L; Artzer, M

    2000-01-01

    Children with hyperlexia read words spontaneously before the age of five, have impaired comprehension on both listening and reading tasks, and have word recognition skill above expectations based on cognitive and linguistic abilities. One student with hyperlexia and another student with higher word recognition than comprehension skills who started to read words at a very early age were followed over several years from the primary grades through high school when both were completing a second-year Spanish course. The purpose of the present study was to examine the foreign language (FL) word recognition, spelling, reading comprehension, writing, speaking, and listening skills of the two students and another high school student without hyperlexia. Results showed that the student without hyperlexia achieved higher scores than the hyperlexic student and the student with above average word recognition skills on most FL proficiency measures. The student with hyperlexia and the student with above average word recognition skills achieved higher scores on the Spanish proficiency tasks that required the exclusive use of phonological (pronunciation) and phonological/orthographic (word recognition, spelling) skills than on Spanish proficiency tasks that required the use of listening comprehension and speaking and writing skills. The findings provide support for the notion that word recognition and spelling in a FL may be modular processes and exist independently of general cognitive and linguistic skills. Results also suggest that students may have stronger FL learning skills in one language component than in other components of language, and that there may be a weak relationship between FL word recognition and oral proficiency in the FL.

  7. Brain activation during word identification and word recognition

    DEFF Research Database (Denmark)

    Jernigan, Terry L.; Ostergaard, Arne L.; Law, Ian

    1998-01-01

    Previous memory research has suggested that the effects of prior study observed in priming tasks are functionally, and neurobiologically, distinct phenomena from the kind of memory expressed in conventional (explicit) memory tests. Evidence for this position comes from observed dissociations between memory scores obtained with the two kinds of tasks. However, there is continuing controversy about the meaning of these dissociations. In recent studies, Ostergaard (1998a, Memory Cognit. 26:40-60; 1998b, J. Int. Neuropsychol. Soc., in press) showed that simply degrading visual word stimuli can dramatically alter the degree to which word priming shows a dissociation from word recognition; i.e., effects of a number of factors on priming paralleled their effects on recognition memory tests when the words were degraded at test. In the present study, cerebral blood flow changes were measured while...

  8. Visual recognition of permuted words

    Science.gov (United States)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We place our study in the context of dual-route theories of reading, and we observe that the dual-route theory is consistent with our hypothesis of a distinction in the underlying cognitive behavior for reading permuted and non-permuted words. We conducted three experiments using lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine the significance of differences in response-time latencies for the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% for Urdu and by 11% for German. We also found a considerable difference in reading behavior between the cursive and alphabetic languages, and we observed that reading Urdu is comparatively slower than reading German due to the characteristics of its cursive script.

  9. Anticipatory coarticulation facilitates word recognition in toddlers.

    Science.gov (United States)

    Mahr, Tristan; McMillan, Brianna T M; Saffran, Jenny R; Ellis Weismer, Susan; Edwards, Jan

    2015-09-01

    Children learn from their environments and their caregivers. To capitalize on learning opportunities, young children have to recognize familiar words efficiently by integrating contextual cues across word boundaries. Previous research has shown that adults can use phonetic cues from anticipatory coarticulation during word recognition. We asked whether 18-24 month-olds (n=29) used coarticulatory cues on the word "the" when recognizing the following noun. We performed a looking-while-listening eyetracking experiment to examine word recognition in neutral vs. facilitating coarticulatory conditions. Participants looked to the target image significantly sooner when the determiner contained facilitating coarticulatory cues. These results provide the first evidence that novice word-learners can take advantage of anticipatory sub-phonemic cues during word recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  11. Visual word recognition across the adult lifespan.

    Science.gov (United States)

    Cohen-Shikora, Emily R; Balota, David A

    2016-08-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult life span and across a large set of stimuli (N = 1,187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgment). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the word recognition system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly because of sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using 3 different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Adult Word Recognition and Visual Sequential Memory

    Science.gov (United States)

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  13. Speed and automaticity of word recognition - inseparable twins?

    DEFF Research Database (Denmark)

    Poulsen, Mads; Asmussen, Vibeke; Elbro, Carsten

    'Speed and automaticity' of word recognition is a standard collocation. However, it is not clear whether speed and automaticity (i.e., effortlessness) make independent contributions to reading comprehension. In theory, both speed and automaticity may save cognitive resources for comprehension processes. Hence, the aim of the present study was to assess the unique contributions of word recognition speed and automaticity to reading comprehension while controlling for decoding speed and accuracy. Method: 139 Grade 5 students completed tests of reading comprehension and computer-based tests of speed of decoding and word recognition together with a test of effortlessness (automaticity) of word recognition. Effortlessness was measured in a dual task in which participants were presented with a word enclosed in an unrelated figure. The task was to read the word and decide whether the figure was a triangle...

  14. Visual Word Recognition Across the Adult Lifespan

    Science.gov (United States)

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  15. The cingulo-opercular network provides word-recognition benefit.

    Science.gov (United States)

    Vaden, Kenneth I; Kuchinsky, Stefanie E; Cute, Stephanie L; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A

    2013-11-27

    Recognizing speech in difficult listening conditions requires considerable focus of attention that is often demonstrated by elevated activity in putative attention systems, including the cingulo-opercular network. We tested the prediction that elevated cingulo-opercular activity provides word-recognition benefit on a subsequent trial. Eighteen healthy, normal-hearing adults (10 females; aged 20-38 years) performed word recognition (120 trials) in multi-talker babble at +3 and +10 dB signal-to-noise ratios during a sparse sampling functional magnetic resonance imaging (fMRI) experiment. Blood oxygen level-dependent (BOLD) contrast was elevated in the anterior cingulate cortex, anterior insula, and frontal operculum in response to poorer speech intelligibility and response errors. These brain regions exhibited significantly greater correlated activity during word recognition compared with rest, supporting the premise that word-recognition demands increased the coherence of cingulo-opercular network activity. Consistent with an adaptive control network explanation, general linear mixed model analyses demonstrated that increased magnitude and extent of cingulo-opercular network activity was significantly associated with correct word recognition on subsequent trials. These results indicate that elevated cingulo-opercular network activity is not simply a reflection of poor performance or error but also supports word recognition in difficult listening conditions.

  16. Syllable Transposition Effects in Korean Word Recognition

    Science.gov (United States)

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  17. Emotion and language: Valence and arousal affect word recognition

    Science.gov (United States)

    Brysbaert, Marc; Warriner, Amy Beth

    2014-01-01

    Emotion influences most aspects of cognition and behavior, but emotional factors are conspicuously absent from current models of word recognition. The influence of emotion on word recognition has mostly been reported in prior studies on the automatic vigilance for negative stimuli, but the precise nature of this relationship is unclear. Various models of automatic vigilance have claimed that the effect of valence on response times is categorical, an inverted-U, or interactive with arousal. The present study used a sample of 12,658 words, and included many lexical and semantic control factors, to determine the precise nature of the effects of arousal and valence on word recognition. Converging empirical patterns observed in word-level and trial-level data from lexical decision and naming indicate that valence and arousal exert independent monotonic effects: Negative words are recognized more slowly than positive words, and arousing words are recognized more slowly than calming words. Valence explained about 2% of the variance in word recognition latencies, whereas the effect of arousal was smaller. Valence and arousal do not interact, but both interact with word frequency, such that valence and arousal exert larger effects among low-frequency words than among high-frequency words. These results necessitate a new model of affective word processing whereby the degree of negativity monotonically and independently predicts the speed of responding. This research also demonstrates that incorporating emotional factors, especially valence, improves the performance of models of word recognition. PMID:24490848
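
    A minimal, illustrative sketch of the kind of item-level regression described in this record: additive (monotonic) valence and arousal effects plus their interactions with word frequency. The data are synthetic and all variable names are assumptions for illustration; this is not the authors' analysis script.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic item-level data standing in for a large word-by-word dataset;
        # the values and column names are illustrative assumptions only.
        rng = np.random.default_rng(0)
        n = 2000
        words = pd.DataFrame({
            "valence": rng.uniform(1, 9, n),        # 1 = very negative, 9 = very positive
            "arousal": rng.uniform(1, 9, n),        # 1 = calming, 9 = arousing
            "log_frequency": rng.normal(3.0, 1.0, n),
            "length": rng.integers(3, 10, n),
        })
        # Simulated lexical decision RTs with monotonic valence/arousal effects.
        words["rt"] = (700 - 8 * words["valence"] + 3 * words["arousal"]
                       - 25 * words["log_frequency"] + 5 * words["length"]
                       + rng.normal(0, 40, n))

        # Additive valence and arousal terms plus their interactions with word
        # frequency, controlling for word length.
        model = smf.ols("rt ~ (valence + arousal) * log_frequency + length",
                        data=words).fit()
        print(model.summary())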

  18. Automated smartphone audiometry: Validation of a word recognition test app.

    Science.gov (United States)

    Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J

    2018-03-01

    Objective: Develop and validate an automated smartphone word recognition test. Study design: Cross-sectional case-control diagnostic test comparison. Methods: An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold standard speech audiometry test performed by an audiologist were compared. Results: Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. Conclusions: The WordRec automated smartphone app accurately determines word recognition scores. Level of evidence: 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
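
    A small sketch of the kind of agreement check described in this record: correlating paired word-recognition scores from the app and from booth testing, and counting how many ears fall within a tolerance. The scores and the 10-point margin below are made-up placeholders, not the study's data or its clinical criterion.

        import numpy as np

        # Hypothetical paired word-recognition scores (% correct) for the same ears.
        app_scores = np.array([92, 80, 64, 88, 72, 40, 96, 56, 84, 68], dtype=float)
        booth_scores = np.array([88, 84, 68, 88, 76, 44, 92, 60, 80, 72], dtype=float)

        # Linear correlation between the two test methods.
        r = np.corrcoef(app_scores, booth_scores)[0, 1]

        # Proportion of ears whose app score falls within the chosen margin of the
        # booth score.
        margin = 10.0
        within = np.mean(np.abs(app_scores - booth_scores) <= margin)

        print(f"r = {r:.2f}; {within:.0%} of ears within +/-{margin:.0f} points")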

  19. Learning during processing: Word learning doesn’t wait for word recognition to finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  20. [Representation of letter position in visual word recognition process].

    Science.gov (United States)

    Makioka, S

    1994-08-01

    Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly-presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) A high false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, the effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  1. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    Science.gov (United States)

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  2. Sight Word Reading in Prereaders: Use of Logographic vs. Alphabetic Access Routes.

    Science.gov (United States)

    Scott, Judith Anne; Ehri, Linnea C.

    1990-01-01

    Investigates whether prereaders who know all their letters are better at forming logographic access routes than letter-sound access routes into memory for words read by sight. Concludes that prereaders become capable of forming letter-sound access routes when they learn letters well enough to take advantage of the phonetic cues the letters…

  3. Descriptive analysis and comparison of strategic incremental rehearsal to "Business as Usual" sight-word instruction for an adult nonreader with intellectual disability.

    Science.gov (United States)

    Richman, David M; Grubb, Laura; Thompson, Samuel

    2018-01-01

    Strategic Incremental Rehearsal (SIR) is an effective method for teaching sight-word acquisition, but has neither been evaluated for use in adults with an intellectual disability, nor directly compared to the ongoing instruction in the natural environment. Experimental analysis of sight word acquisition via an alternating treatment design was conducted with a 23-year-old woman with Down syndrome. SIR was compared to the current reading instruction (CRI) in a classroom for young adults with intellectual disabilities. CRI procedures included non-contingent praise, receptive touch prompts ("touch the word bat"), echoic prompts ("say bat"), textual prompts ("read the word"), and pre-determined introduction of new words. SIR procedures included textual prompts on flash cards, contingent praise, corrective feedback, and mastery-based introduction of new words. The results indicated that SIR was associated with more rapid acquisition of sight words than CRI. Directions for future research could include systematic comparisons to other procedures, and evaluations of procedural permutations of SIR.

  4. Clinical Strategies for Sampling Word Recognition Performance.

    Science.gov (United States)

    Schlauch, Robert S; Carney, Edward

    2018-04-17

    Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the 1 list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
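
    A minimal Monte Carlo sketch of the general approach described in this record, assuming each word on a list is scored correct or incorrect independently (binomial variability). The fixed 20-point detection criterion is a simplification for illustration, not the authors' critical-range procedure, and the numbers it produces are not theirs.

        import numpy as np

        rng = np.random.default_rng(1)

        def detection_rate(true_before=0.60, true_after=0.40,
                           n_words=25, n_sims=10_000, min_drop=0.20):
            """Estimate how often a true drop in word recognition from
            true_before to true_after is detected with lists of n_words items,
            using a simple 'observed drop of at least min_drop' criterion."""
            before = rng.binomial(n_words, true_before, n_sims) / n_words
            after = rng.binomial(n_words, true_after, n_sims) / n_words
            return np.mean(before - after >= min_drop)

        for n in (25, 50, 125):
            print(f"{n:>3}-word list: drop detected in "
                  f"{detection_rate(n_words=n):.1%} of simulated cases")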

  5. The Role of Antibody in Korean Word Recognition

    Science.gov (United States)

    Lee, Chang Hwan; Lee, Yoonhyoung; Kim, Kyungil

    2010-01-01

    A subsyllabic phonological unit, the antibody, has received little attention as a potential fundamental processing unit in word recognition. The psychological reality of the antibody in Korean word recognition was investigated by looking at the performance of subjects presented with nonwords and words in the lexical decision task. In Experiment 1, the…

  6. Role of syllable segmentation processes in peripheral word recognition.

    Science.gov (United States)

    Bernard, Jean-Baptiste; Calabrèse, Aurélie; Castet, Eric

    2014-12-01

    Previous studies of foveal visual word recognition provide evidence for a low-level syllable decomposition mechanism occurring during the recognition of a word. We investigated whether such a decomposition mechanism also exists in peripheral word recognition. Single words were visually presented to subjects in the peripheral field using a 6° square gaze-contingent simulated central scotoma. In the first experiment, words were either unicolor or had their adjacent syllables segmented with two different colors (color/syllable congruent condition). Reaction times for correct word identification were measured for the two different conditions and for two different print sizes. Results show a significant decrease in reaction time for the color/syllable congruent condition compared with the unicolor condition. A second experiment suggests that this effect is specific to syllable decomposition and results from strategic control, presumably involving attentional factors, rather than from stimulus-driven control.

  7. Auditory word recognition is not more sensitive to word-initial than to word-final stimulus information

    NARCIS (Netherlands)

    Vlugt, van der M.J.; Nooteboom, S.G.

    1986-01-01

    Several accounts of human recognition of spoken words assign special importance to stimulus-word onsets. The experiment described here was designed to find out whether such a word-beginning superiority effect, which is supported by experimental evidence of various kinds, is due to a special

  8. Exploring EFL Students’ Reading Comprehension Process through Their Life Experiences and the Sight Word Strategy

    Directory of Open Access Journals (Sweden)

    Jennifer Camargo

    2010-12-01

    Due to the role language and literature play in the construction of social, economic and cultural systems, reading comprehension has become a growing challenge. This study examined how the relationship between reading comprehension in English as a foreign language and life experiences, while using the Sight Word Strategy, could prove significant. Fifth graders at a public school in Bogotá participated in this study. Data were collected using tape recordings, field notes, archival data and students’ reflections. Analysis indicated that comprehension and construction of meaning were generated by sharing life experiences and through the interaction produced in each one of the Sight Word Strategy stages. The study suggested further research into a more encompassing definition of the correlation between reading comprehension and life experiences as an appropriate goal for English as a foreign language.

  9. Item Effects in Recognition Memory for Words

    Science.gov (United States)

    Freeman, Emily; Heathcote, Andrew; Chalmers, Kerry; Hockley, William

    2010-01-01

    We investigate the effects of word characteristics on episodic recognition memory using analyses that avoid Clark's (1973) "language-as-a-fixed-effect" fallacy. Our results demonstrate the importance of modeling word variability and show that episodic memory for words is strongly affected by item noise (Criss & Shiffrin, 2004), as measured by the…

  10. The effects of using flashcards with reading racetrack to teach letter sounds, sight words, and math facts to elementary students with learning disabilities

    Directory of Open Access Journals (Sweden)

    Rachel Erbey

    2011-07-01

    The purpose of this study was to measure the effects of reading racetrack and flashcards when teaching phonics, sight words, and addition facts. The participants for the sight word and phonics portion of this study were two seven-year-old boys in the second grade. Both participants were diagnosed with a learning disability. The third participant was diagnosed with attention deficit hyperactivity disorder by his pediatrician and with a learning disability and traumatic brain injury by his school’s multi-disciplinary team. The dependent measures were corrects and errors when reading from a first grade level sight word list. Math facts were selected for the third participant based on a 100 addition fact test. The study demonstrated that racetracks paired with the flashcard intervention improved the students’ number of corrects for each subject-matter area (phonics, sight words, and math facts). However, the results show that some students had more success with it than others. These outcomes clearly warrant further research.

  11. Rapid Word Recognition as a Measure of Word-Level Automaticity and Its Relation to Other Measures of Reading

    Science.gov (United States)

    Frye, Elizabeth M.; Gosky, Ross

    2012-01-01

    The present study investigated the relationship between rapid recognition of individual words (Word Recognition Test) and two measures of contextual reading: (1) grade-level Passage Reading Test (IRI passage) and (2) performance on standardized STAR Reading Test. To establish if time of presentation on the word recognition test was a factor in…

  12. Infant word recognition: Insights from TRACE simulations.

    Science.gov (United States)

    Mayor, Julien; Plunkett, Kim

    2014-02-01

    The TRACE model of speech perception (McClelland & Elman, 1986) is used to simulate results from the infant word recognition literature, to provide a unified, theoretical framework for interpreting these findings. In a first set of simulations, we demonstrate how TRACE can reconcile apparently conflicting findings suggesting, on the one hand, that consonants play a pre-eminent role in lexical acquisition (Nespor, Peña & Mehler, 2003; Nazzi, 2005), and on the other, that there is a symmetry in infant sensitivity to vowel and consonant mispronunciations of familiar words (Mani & Plunkett, 2007). In a second series of simulations, we use TRACE to simulate infants' graded sensitivity to mispronunciations of familiar words as reported by White and Morgan (2008). An unexpected outcome is that TRACE fails to demonstrate graded sensitivity for White and Morgan's stimuli unless the inhibitory parameters in TRACE are substantially reduced. We explore the ramifications of this finding for theories of lexical development. Finally, TRACE mimics the impact of phonological neighbourhoods on early word learning reported by Swingley and Aslin (2007). TRACE offers an alternative explanation of these findings in terms of mispronunciations of lexical items rather than imputing word learning to infants. Together these simulations provide an evaluation of Developmental (Jusczyk, 1993) and Familiarity (Metsala, 1999) accounts of word recognition by infants and young children. The findings point to a role for both theoretical approaches whereby vocabulary structure and content constrain infant word recognition in an experience-dependent fashion, and highlight the continuity in the processes and representations involved in lexical development during the second year of life.

  13. Prefixes versus suffixes: a search for a word-beginning superiority effect in word recognition from degraded speech

    NARCIS (Netherlands)

    Nooteboom, S.G.; Vlugt, van der M.J.

    1985-01-01

    This paper reports on a word recognition experiment in search of evidence for a word-beginning superiority effect in recognition from low-quality speech. In the experiment, lexical redundancy was controlled by combining monosyllable word stems with strongly constraining or weakly constraining

  14. Discourse context and the recognition of reduced and canonical spoken words

    OpenAIRE

    Brouwer, S.; Mitterer, H.; Huettig, F.

    2013-01-01

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" ...

  15. Beyond word recognition: understanding pediatric oral health literacy.

    Science.gov (United States)

    Richman, Julia Anne; Huebner, Colleen E; Leggott, Penelope J; Mouradian, Wendy E; Mancl, Lloyd A

    2011-01-01

    Parental oral health literacy is proposed to be an indicator of children's oral health. The purpose of this study was to test if word recognition, commonly used to assess health literacy, is an adequate measure of pediatric oral health literacy. This study evaluated 3 aspects of oral health literacy and parent-reported child oral health. A 3-part pediatric oral health literacy inventory was created to assess parents' word recognition, vocabulary knowledge, and comprehension of 35 terms used in pediatric dentistry. The inventory was administered to 45 English-speaking parents of children enrolled in Head Start. Parents' ability to read dental terms was not associated with vocabulary knowledge of the terms (r=0.29). Vocabulary knowledge was strongly associated with comprehension (r=0.80). Parent-reported child oral health status was not associated with word recognition, vocabulary knowledge, or comprehension; however, parents reporting either excellent or fair/poor ratings had higher scores on all components of the inventory. Word recognition is an inadequate indicator of comprehension of pediatric oral health concepts; pediatric oral health literacy is a multifaceted construct. Parents with adequate reading ability may have difficulty understanding oral health information.

  16. An Investigation of the Role of Grapheme Units in Word Recognition

    Science.gov (United States)

    Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel

    2012-01-01

    In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…

  17. The role of familiarity in associative recognition of unitized compound word pairs.

    Science.gov (United States)

    Ahmad, Fahad N; Hockley, William E

    2014-01-01

    This study examined the effect of unitization and contribution of familiarity in the recognition of word pairs. Compound words were presented as word pairs and were contrasted with noncompound word pairs in an associative recognition task. In Experiments 1 and 2, yes-no recognition hit and false-alarm rates were significantly higher for compound than for noncompound word pairs, with no difference in discrimination in both within- and between-subject comparisons. Experiment 2 also showed that item recognition was reduced for words from compound compared to noncompound word pairs, providing evidence of the unitization of the compound pairs. A two-alternative forced-choice test used in Experiments 3A and 3B provided evidence that the concordant effect for compound word pairs was largely due to familiarity. A discrimination advantage for compound word pairs was also seen in these experiments. Experiment 4A showed that a different pattern of results is seen when repeated noncompound word pairs are compared to compound word pairs. Experiment 4B showed that memory for the individual items of compound word pairs was impaired relative to items in repeated and nonrepeated noncompound word pairs, and Experiment 5 demonstrated that this effect is eliminated when the elements of compound word pairs are not unitized. The concordant pattern seen in yes-no recognition and the discrimination advantage in forced-choice recognition for compound relative to noncompound word pairs is due to greater reliance on familiarity at test when pairs are unitized.

  18. Infant word recognition: Insights from TRACE simulations☆

    Science.gov (United States)

    Mayor, Julien; Plunkett, Kim

    2014-01-01

    The TRACE model of speech perception (McClelland & Elman, 1986) is used to simulate results from the infant word recognition literature, to provide a unified, theoretical framework for interpreting these findings. In a first set of simulations, we demonstrate how TRACE can reconcile apparently conflicting findings suggesting, on the one hand, that consonants play a pre-eminent role in lexical acquisition (Nespor, Peña & Mehler, 2003; Nazzi, 2005), and on the other, that there is a symmetry in infant sensitivity to vowel and consonant mispronunciations of familiar words (Mani & Plunkett, 2007). In a second series of simulations, we use TRACE to simulate infants’ graded sensitivity to mispronunciations of familiar words as reported by White and Morgan (2008). An unexpected outcome is that TRACE fails to demonstrate graded sensitivity for White and Morgan’s stimuli unless the inhibitory parameters in TRACE are substantially reduced. We explore the ramifications of this finding for theories of lexical development. Finally, TRACE mimics the impact of phonological neighbourhoods on early word learning reported by Swingley and Aslin (2007). TRACE offers an alternative explanation of these findings in terms of mispronunciations of lexical items rather than imputing word learning to infants. Together these simulations provide an evaluation of Developmental (Jusczyk, 1993) and Familiarity (Metsala, 1999) accounts of word recognition by infants and young children. The findings point to a role for both theoretical approaches whereby vocabulary structure and content constrain infant word recognition in an experience-dependent fashion, and highlight the continuity in the processes and representations involved in lexical development during the second year of life. PMID:24493907

  19. Word Reading and Processing of the Identity and Order of Letters by Children with Low Vision and Sighted Children

    Science.gov (United States)

    Gompel, Marjolein; van Bon, Wim H. J.; Schreuder, Robert

    2004-01-01

    Two aspects of word reading were investigated in two word-naming experiments: the identification of the constituent letters of a word and the processing of letter-order information. Both experiments showed qualitative differences between children with low vision and sighted children, but no quantitative or qualitative differences within the group…

  20. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  1. Auditory word recognition: extrinsic and intrinsic effects of word frequency.

    Science.gov (United States)

    Connine, C M; Titone, D; Wang, J

    1993-01-01

    Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.

  2. Imageability and age of acquisition effects in disyllabic word recognition.

    Science.gov (United States)

    Cortese, Michael J; Schock, Jocelyn

    2013-01-01

    Imageability and age of acquisition (AoA) effects, as well as key interactions between these variables and frequency and consistency, were examined via multiple regression analyses for 1,936 disyllabic words, using reaction time and accuracy measures from the English Lexicon Project. Both imageability and AoA accounted for unique variance in lexical decision and naming reaction time performance. In addition, across both tasks, AoA and imageability effects were larger for low-frequency words than high-frequency words, and imageability effects were larger for later acquired than earlier acquired words. In reading aloud, consistency effects in reaction time were larger for later acquired words than earlier acquired words, but consistency did not interact with imageability in the reaction time analysis. These results provide further evidence that multisyllabic word recognition is similar to monosyllabic word recognition and indicate that AoA and imageability are valid predictors of word recognition performance. In addition, the results indicate that meaning exerts a larger influence in the reading aloud of multisyllabic words than monosyllabic words. Finally, parallel-distributed-processing approaches provide a useful theoretical framework to explain the main effects and interactions.
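
    A minimal sketch of the kind of item-level regression analysis described above, written in Python with statsmodels. The input file and column names (rt, frequency, aoa, imageability, consistency) are hypothetical placeholders, not the actual English Lexicon Project variable names.

        # Item-level regression sketch: main effects of AoA and imageability plus the
        # interactions reported above (AoA x frequency, imageability x frequency,
        # consistency x AoA). File and column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        items = pd.read_csv("disyllabic_items.csv")  # one row per word

        model = smf.ols(
            "rt ~ frequency * aoa + frequency * imageability + consistency * aoa",
            data=items,
        ).fit()
        print(model.summary())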

  3. Large-corpus phoneme and word recognition and the generality of lexical context in CVC word perception.

    Science.gov (United States)

    Gelfand, Jessica T; Christie, Robert E; Gelfand, Stanley A

    2014-02-01

    Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For consonant-vowel-consonant (CVC) nonsense syllables, j ∼ 3 because all 3 phonemes are needed to identify the syllable, but j ∼ 2.5 for real-word CVCs (revealing ∼2.5 independent perceptual units) because higher level contributions such as lexical knowledge enable word recognition even if less than 3 phonemes are accurately received. These findings were almost exclusively determined with the 120-word corpus of the isophonemic word lists (Boothroyd, 1968a; Boothroyd & Nittrouer, 1988), presented one word at a time. It is therefore possible that its generality or applicability may be limited. This study thus determined j by using a much larger and less restricted corpus of real-word CVCs presented in 3-word groups as well as whether j is influenced by test size. The j-factor for real-word CVCs was derived from the recognition performance of 223 individuals with a broad range of hearing sensitivity by using the Tri-Word Test (Gelfand, 1998), which involves 50 three-word presentations and a corpus of 450 words. The influence of test size was determined from a subsample of 96 participants with separate scores for the first 10, 20, and 25 (and all 50) presentation sets of the full test. The mean value of j was 2.48 with a 95% confidence interval of 2.44-2.53, which is in good agreement with values obtained with isophonemic word lists, although its value varies among individuals. A significant correlation was found between percent-correct scores and j, but it was small and accounted for only 12.4% of the variance in j for phoneme scores ≥60%. Mean j-factors for the 10-, 20-, 25-, and 50-set test sizes were between 2.49 and 2.53 and were not
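
    The j-factor described above follows from the assumption that whole-word probability equals phoneme probability raised to the power j. A minimal Python sketch, using illustrative scores rather than data from the study:

        import math

        def j_factor(p_word: float, p_phoneme: float) -> float:
            """Effective number of independent perceptual units: p_word = p_phoneme ** j."""
            return math.log(p_word) / math.log(p_phoneme)

        # Illustrative values: 60% phonemes correct and 28% whole words correct gives
        # j near 2.5, the value reported above for real-word CVCs.
        print(round(j_factor(0.28, 0.60), 2))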

  4. Effects of dynamic text in an AAC app on sight word reading for individuals with autism spectrum disorder.

    Science.gov (United States)

    Caron, Jessica; Light, Janice; Holyfield, Christine; McNaughton, David

    2018-06-01

    The purpose of this study was to investigate the effects of Transition to Literacy (T2L) software features (i.e., dynamic text and speech output upon selection of a graphic symbol) within a grid display in an augmentative and alternative communication (AAC) app, on the sight word reading skills of individuals with autism spectrum disorders (ASD) and complex communication needs. The study implemented a single-subject multiple probe research design across one set of three participants. The same design was utilized with an additional set of two participants. As part of the intervention, the participants were exposed to an AAC app with the T2L features during a highly structured matching task. With only limited exposure to the features, the five participants all demonstrated increased accuracy of identification of 12 targeted sight words. This study provides preliminary evidence that redesigning AAC apps to include dynamic text combined with speech output can positively impact the sight word reading of participants during a structured task. This adaptation in AAC system design could be used to complement literacy instruction and to potentially infuse components of literacy learning into daily communication.

  5. Asymmetries in Early Word Recognition: The Case of Stops and Fricatives

    Science.gov (United States)

    Altvater-Mackensen, Nicole; van der Feest, Suzanne V. H.; Fikkert, Paula

    2014-01-01

    Toddlers' discrimination of native phonemic contrasts is generally unproblematic. Yet using those native contrasts in word learning and word recognition can be more challenging. In this article, we investigate perceptual versus phonological explanations for asymmetrical patterns found in early word recognition. We systematically investigated the…

  6. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    Science.gov (United States)

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  7. Improved word recognition for observers with age-related maculopathies using compensation filters

    Science.gov (United States)

    Lawton, Teri B.

    1988-01-01

    A method for improving word recognition for people with age-related maculopathies, which cause a loss of central vision, is discussed. It is found that the use of individualized compensation filters based on a person's normalized contrast sensitivity function can improve word recognition for people with age-related maculopathies. It is shown that 27-70 percent more magnification is needed for unfiltered words compared to filtered words. The improvement in word recognition is positively correlated with the severity of vision loss.

  8. Neighbourhood frequency effects in visual word recognition and naming

    NARCIS (Netherlands)

    Grainger, I.J.

    1988-01-01

    Two experiments are reported that examine the influence of a given word's orthographic neighbours (orthographically similar words) on the recognition and pronunciation of that word. In Experiment 1 (lexical decision) neighbourhood frequency as opposed to stimulus-word frequency was shown to have a

  9. Using an iPad® App to Improve Sight Word Reading Fluency for At-Risk First Graders

    Science.gov (United States)

    Musti-Rao, Shobana; Lo, Ya-yu; Plati, Erin

    2015-01-01

    We used a multiple baseline across word lists design nested within a multiple baseline across participants design to examine the effects of instruction delivered using an iPad® app on sight word fluency and oral reading fluency of six first graders identified as at risk for reading failure. In Study 1, three students participated in…

  10. Lexical and age effects on word recognition in noise in normal-hearing children.

    Science.gov (United States)

    Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing

    2015-12-01

    The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare the word-recognition performance in noise to that in quiet listening conditions. Participants were 213 NH children (age ranged between 3 and 6 years old). Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (i.e., dissyllabic easy (DE), dissyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech spectrum-shaped noise (SSN) with a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects, with syllable length and difficulty level as the main factors, on word recognition in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. The word-recognition performance in noise was significantly poorer than that in quiet, and the individual variations in performance in noise were much greater than those in quiet. Word recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with dissyllabic words than with monosyllabic words; "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. The word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and lexical characteristics of words had significant influences on the performance of Mandarin-Chinese word recognition in noise. The lexical effects were more obvious under noise listening conditions than in quiet. The word-recognition

  11. Handwritten Word Recognition Using Multi-view Analysis

    Science.gov (United States)

    de Oliveira, J. J.; de A. Freitas, C. O.; de Carvalho, J. M.; Sabourin, R.

    This paper brings a contribution to the problem of efficiently recognizing handwritten words from a limited-size lexicon. For that, a multiple classifier system has been developed that analyzes the words from three different approximation levels, in order to get a computational approach inspired by the human reading process. For each approximation level a three-module architecture composed of a zoning mechanism (pseudo-segmenter), a feature extractor and a classifier is defined. The proposed application is the recognition of the Portuguese handwritten names of the months, for which a best recognition rate of 97.7% was obtained, using classifier combination.

  12. Interference of spoken word recognition through phonological priming from visual objects and printed words

    NARCIS (Netherlands)

    McQueen, J.M.; Hüttig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase

  13. Word and face recognition deficits following posterior cerebral artery stroke

    DEFF Research Database (Denmark)

    Kuhn, Christina D.; Asperud Thomsen, Johanne; Delfi, Tzvetelina

    2016-01-01

    Recent findings have challenged the existence of category specific brain areas for perceptual processing of words and faces, suggesting the existence of a common network supporting the recognition of both. We examined the performance of patients with focal lesions in posterior cortical areas to investigate whether deficits in recognition of words and faces systematically co-occur as would be expected if both functions rely on a common cerebral network. Seven right-handed patients with unilateral brain damage following stroke in areas supplied by the posterior cerebral artery were included (four with right hemisphere damage, three with left, tested at least 1 year post stroke). We examined word and face recognition using a delayed match-to-sample paradigm using four different categories of stimuli: cropped faces, full faces, words, and cars. Reading speed and word length effects...

  14. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  15. Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?

    Science.gov (United States)

    Haro, Juan; Ferré, Pilar

    2018-06-01

    It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these inconsistent findings may be due to the approach employed to select ambiguous words across studies. To address this issue, we conducted three LDT experiments in which we varied the measure used to classify ambiguous and unambiguous words. The results suggest that multiple unrelated meanings facilitate word recognition. In addition, we observed that the approach employed to select ambiguous words may affect the pattern of experimental results. This evidence has relevant implications for theoretical accounts of the processing and representation of ambiguous words.

  16. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    Science.gov (United States)

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups, one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights

  17. Word recognition in Alzheimer's disease: Effects of semantic degeneration.

    Science.gov (United States)

    Cuetos, Fernando; Arce, Noemí; Martínez, Carmen; Ellis, Andrew W

    2017-03-01

    Impairments of word recognition in Alzheimer's disease (AD) have been less widely investigated than impairments affecting word retrieval and production. In particular, we know little about what makes individual words easier or harder for patients with AD to recognize. We used a lexical selection task in which participants were shown sets of four items, each set consisting of one word and three non-words. The task was simply to point to the word on each trial. Forty patients with mild-to-moderate AD were significantly impaired on this task relative to matched controls who made very few errors. The number of patients with AD able to recognize each word correctly was predicted by the frequency, age of acquisition, and imageability of the words, but not by their length or number of orthographic neighbours. Patient Mini-Mental State Examination and phonological fluency scores also predicted the number of words recognized. We propose that progressive degradation of central semantic representations in AD differentially affects the ability to recognize low-imageability, low-frequency, late-acquired words, with the same factors affecting word recognition as affecting word retrieval. © 2015 The British Psychological Society.

  18. ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS

    NARCIS (Netherlands)

    Coisy, C.; Belaid, A.

    2004-01-01

    In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs where each state corresponds to a letter; each letter is modelled by an NSHP-HMM (Markov field). Global models are built dynamically, and used for recognition

  19. Reading in Developmental Prosopagnosia: Evidence for a Dissociation Between Word and Face Recognition

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Klargaard, Solja; Petersen, Anders

    2018-01-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. Method: We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception test and a Face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: a) single word reading with words of varying length, b) vocal response times in single letter and short word naming, c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and d) text reading. Results: Participants with developmental prosopagnosia performed strikingly similar to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition...

  20. Recognition memory for vibrotactile rhythms: an fMRI study in blind and sighted individuals.

    Science.gov (United States)

    Sinclair, Robert J; Dixit, Sachin; Burton, Harold

    2011-01-01

    Calcarine sulcal cortex possibly contributes to semantic recognition memory in early blind (EB). We assessed a recognition memory role using vibrotactile rhythms and a retrieval success paradigm involving learned "old" and "new" rhythms in EB and sighted. EB showed no activation differences in occipital cortex indicating retrieval success but replicated findings of somatosensory processing. Both groups showed retrieval success in primary somatosensory, precuneus, and orbitofrontal cortex. The S1 activity might indicate generic sensory memory processes.

  1. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    Science.gov (United States)

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception test and a Face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similar to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Word-level recognition of multifont Arabic text using a feature vector matching approach

    Science.gov (United States)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
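
    A minimal sketch of the word-level matching step described above: a query feature vector is compared against a precomputed database holding several vectors per lexicon word (multiple fonts and noise models), and the closest words are returned as hypotheses. The feature extraction itself is not shown; all names are illustrative.

        import numpy as np

        def match_word(query_vec, lexicon, top_n=5):
            """Return the top_n lexicon words whose stored vectors lie closest to the query.

            lexicon maps each word to a list of precomputed feature vectors.
            """
            scored = []
            for word, vectors in lexicon.items():
                best = min(np.linalg.norm(query_vec - v) for v in vectors)
                scored.append((best, word))
            scored.sort()  # smallest distance first = best hypothesis first
            return [word for _, word in scored[:top_n]]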

  3. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
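
    A minimal sketch of a linear mixed model analysis in the spirit of the one above, assuming trial-level data with hypothetical column names (rt, lexical_freq, syllable_freq, participant) and random intercepts per participant; the published analysis may have used a richer random-effects structure.

        import pandas as pd
        import statsmodels.formula.api as smf

        trials = pd.read_csv("lexical_decision_trials.csv")  # one row per trial

        model = smf.mixedlm(
            "rt ~ lexical_freq * syllable_freq",   # crossed 2 x 2 fixed effects
            data=trials,
            groups=trials["participant"],          # random intercept per participant
        ).fit()
        print(model.summary())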

  4. [Explicit memory for type font of words in source monitoring and recognition tasks].

    Science.gov (United States)

    Hatanaka, Yoshiko; Fujita, Tetsuya

    2004-02-01

    We investigated whether people can consciously remember type fonts of words by methods of examining explicit memory: source-monitoring and old/new-recognition. We set matched, non-matched, and non-studied conditions between the study and the test words using two kinds of type fonts: Gothic and MARU. After studying words in one way of encoding, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words were previously presented or not (Exp. 2). We compared the source judgments with old/new recognition data. As a result, these data showed conscious recollection for type font of words on the source monitoring task and dissociation between source monitoring and old/new recognition performance.

  5. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role of positional probability of syllables played in recognition of spoken word in continuous Cantonese speech. Because some sounds occur more frequently at the beginning position or ending position of Cantonese syllables than the others, so these kinds of probabilistic information of syllables may cue the locations…

  6. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    Science.gov (United States)

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps, counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  7. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
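
    A much-simplified sketch of the time-invariant word code the model above is built on: a word is represented by its ordered phoneme pairs ("open diphones"), so no position-specific units are needed. This is only an illustration of the string-kernel idea, not the published model.

        from collections import Counter

        def open_diphones(phonemes):
            """Count all ordered phoneme pairs (adjacent or not) in left-to-right order."""
            pairs = Counter()
            for i in range(len(phonemes)):
                for j in range(i + 1, len(phonemes)):
                    pairs[(phonemes[i], phonemes[j])] += 1
            return pairs

        def similarity(a, b):
            """Crude string-kernel overlap between two phoneme strings, normalized to [0, 1]."""
            da, db = open_diphones(a), open_diphones(b)
            shared = sum((da & db).values())
            return shared / max(1, max(sum(da.values()), sum(db.values())))

        # /k ae t/ ("cat") and /k ae p/ ("cap") share one of three diphones each: ~0.33
        print(similarity(["k", "ae", "t"], ["k", "ae", "p"]))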

  8. Reading component skills in dyslexia: word recognition, comprehension and processing speed.

    Science.gov (United States)

    de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C

    2014-01-01

    The cognitive model of reading comprehension (RC) posits that RC is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills could be integrated into this model, like processing speed, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness of word recognition. The following study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. 40 children and adolescents (8-13 years) were divided into a Dyslexic Group (DG; 18 children, MA = 10.78, SD = 1.66) and control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school and groups were equivalent in school grade, age, gender, and IQ. Oral and RC, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences regarding the accuracy in oral and RC, phonological awareness, naming, and vocabulary scores. DG performed worse than the CG in word recognition (general score and orthographic confusion items) and were slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on the RC test. Data supports the importance of delimitation of different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  9. Modeling Polymorphemic Word Recognition: Exploring Differences among Children with Early-Emerging and Late- Emerging Word Reading Difficulty

    Science.gov (United States)

    Kearns, Devin M.; Steacy, Laura M.; Compton, Donald L.; Gilbert, Jennifer K.; Goodwin, Amanda P.; Cho, Eunsoo; Lindstrom, Esther R.; Collins, Alyson A.

    2016-01-01

    Comprehensive models of derived polymorphemic word recognition skill in developing readers, with an emphasis on children with reading difficulty (RD), have not been developed. The purpose of the present study was to model individual differences in polymorphemic word recognition ability at the item level among 5th-grade children (N = 173)…

  10. Comparison of crisp and fuzzy character networks in handwritten word recognition

    Science.gov (United States)

    Gader, Paul; Mohamed, Magdi; Chiang, Jung-Hsien

    1992-01-01

    Experiments involving handwritten word recognition on words taken from images of handwritten address blocks from the United States Postal Service mailstream are described. The word recognition algorithm relies on the use of neural networks at the character level. The neural networks are trained using crisp and fuzzy desired outputs. The fuzzy outputs were defined using a fuzzy k-nearest neighbor algorithm. The crisp networks slightly outperformed the fuzzy networks at the character level but the fuzzy networks outperformed the crisp networks at the word level.
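
    A minimal sketch of how fuzzy desired outputs of the kind described above can be produced with a fuzzy k-nearest-neighbour rule: each training character receives a graded membership in every class rather than a single crisp label. Parameter values are illustrative.

        import numpy as np

        def fuzzy_knn_memberships(x, train_x, train_y, n_classes, k=5, m=2.0, eps=1e-9):
            """Graded class memberships for sample x, summing to 1 across classes."""
            dists = np.linalg.norm(train_x - x, axis=1)
            nearest = np.argsort(dists)[:k]
            weights = 1.0 / (dists[nearest] ** (2.0 / (m - 1.0)) + eps)  # closer = heavier
            memberships = np.zeros(n_classes)
            for w, idx in zip(weights, nearest):
                memberships[int(train_y[idx])] += w
            return memberships / memberships.sum()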

  11. Hearing taboo words can result in early talker effects in word recognition for female listeners.

    Science.gov (United States)

    Tuft, Samantha E; MᶜLennan, Conor T; Krestar, Maura L

    2018-02-01

    Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants that heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.

  12. The Effects of Explicit Word Recognition Training on Japanese EFL Learners

    Science.gov (United States)

    Burrows, Lance; Holsworth, Michael

    2016-01-01

    This study is a quantitative, quasi-experimental investigation focusing on the effects of word recognition training on word recognition fluency, reading speed, and reading comprehension for 151 Japanese university students at a lower-intermediate reading proficiency level. Four treatment groups were given training in orthographic, phonological,…

  13. Medical Named Entity Recognition for Indonesian Language Using Word Representations

    Science.gov (United States)

    Rahman, Arief

    2018-03-01

    Nowadays, Named Entity Recognition (NER) system is used in medical texts to obtain important medical information, like diseases, symptoms, and drugs. While most NER systems are applied to formal medical texts, informal ones like those from social media (also called semi-formal texts) are starting to get recognition as a gold mine for medical information. We propose a theoretical Named Entity Recognition (NER) model for semi-formal medical texts in our medical knowledge management system by comparing two kinds of word representations: cluster-based word representation and distributed representation.
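
    A minimal sketch of the two kinds of word representation contrasted above, as features for a token-level NER classifier. The cluster table and embedding lookup are stand-ins for whatever resources the proposed system would use; the example tokens are hypothetical.

        import numpy as np

        brown_clusters = {"demam": "1100", "panadol": "0111"}          # word -> bit-string path
        embeddings = {w: np.random.rand(50) for w in brown_clusters}   # word -> dense vector

        def cluster_features(token, prefix_lengths=(2, 4)):
            """Cluster-based representation: sparse binary features from bit-string prefixes."""
            path = brown_clusters.get(token.lower(), "")
            return {f"cluster_prefix_{n}={path[:n]}": 1 for n in prefix_lengths if path}

        def embedding_features(token, dim=50):
            """Distributed representation: a dense vector, zeros for out-of-vocabulary tokens."""
            return embeddings.get(token.lower(), np.zeros(dim))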

  14. The role of native-language phonology in the auditory word identification and visual word recognition of Russian-English bilinguals.

    Science.gov (United States)

    Shafiro, Valeriy; Kharkhurin, Anatoliy V

    2009-03-01

    Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic categorization of words containing four phonological vowel contrasts (/i/-/u/, /I/-/A/, /i/-/I/, /epsilon/-/ae/). Experiment 2 assessed auditory identification accuracy of words containing these four contrasts. Both bilingual groups demonstrated reduced accuracy in auditory identification of two English vowel contrasts absent in their native phonology (/i/-/I/, /epsilon/-/ae/). For late bilinguals, auditory identification difficulty was accompanied by poor visual word recognition for one difficult contrast (/i/-/I/). Bilinguals' visual word recognition moderately correlated with their auditory identification of difficult contrasts. These results indicate that native language phonology can play a role in visual processing of second language words. However, this effect may be considerably constrained by orthographic systems of specific languages.

  15. Reading component skills in dyslexia: word recognition, comprehension and processing speed

    Directory of Open Access Journals (Sweden)

    Darlene Godoy Oliveira

    2014-11-01

    Full Text Available The cognitive model of reading comprehension posits that reading comprehension is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills could be integrated into this model, like processing speed, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness of word recognition. The following study evaluated the components of the reading comprehension model and predictive skills in children and adolescents with dyslexia. 40 children and adolescents (8-13 years) were divided into a Dyslexic Group (DG, 18 children, MA = 10.78, SD = 1.66) and Control Group (CG, 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school and groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary and phonological awareness were assessed. There were no group differences regarding the accuracy in oral and reading comprehension, phonological awareness, naming, and vocabulary scores. DG performed worse than the CG in word recognition (general score and orthographic confusion items) and were slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on the reading comprehension test. Data supports the importance of delimitation of different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  16. Functional Anatomy of Recognition of Chinese Multi-Character Words: Convergent Evidence from Effects of Transposable Nonwords, Lexicality, and Word Frequency.

    Science.gov (United States)

    Lin, Nan; Yu, Xi; Zhao, Ying; Zhang, Mingxia

    2016-01-01

    This fMRI study aimed to identify the neural mechanisms underlying the recognition of Chinese multi-character words by partialling out the confounding effect of reaction time (RT). For this purpose, a special type of nonword, the transposable nonword, was created by reversing the character orders of real words. These nonwords were included in a lexical decision task along with regular (non-transposable) nonwords and real words. Through conjunction analysis on the contrasts of transposable nonwords versus regular nonwords and words versus regular nonwords, the confounding effect of RT was eliminated, and the regions involved in word recognition were reliably identified. The word-frequency effect was also examined in the regions that emerged to further assess their functional roles in word processing. Results showed a significant conjunctional effect and a positive word-frequency effect in the bilateral inferior parietal lobules and posterior cingulate cortex, whereas only a conjunctional effect was found in the anterior cingulate cortex. The roles of these brain regions in recognition of Chinese multi-character words were discussed.

  17. How a hobby can shape cognition: visual word recognition in competitive Scrabble players.

    Science.gov (United States)

    Hargreaves, Ian S; Pexman, Penny M; Zdrazilova, Lenka; Sargious, Peter

    2012-01-01

    Competitive Scrabble is an activity that involves extraordinary word recognition experience. We investigated whether that experience is associated with exceptional behavior in the laboratory in a classic visual word recognition paradigm: the lexical decision task (LDT). We used a version of the LDT that involved horizontal and vertical presentation and a concreteness manipulation. In Experiment 1, we presented this task to a group of undergraduates, as these participants are the typical sample in word recognition studies. In Experiment 2, we compared the performance of a group of competitive Scrabble players with a group of age-matched nonexpert control participants. The results of a series of cognitive assessments showed that the Scrabble players and control participants differed only in Scrabble-specific skills (e.g., anagramming). Scrabble expertise was associated with two specific effects (as compared to controls): vertical fluency (relatively less difficulty judging lexicality for words presented in the vertical orientation) and semantic deemphasis (smaller concreteness effects for word responses). These results suggest that visual word recognition is shaped by experience, and that with experience there are efficiencies to be had even in the adult word recognition system.

  18. Word Recognition Subcomponents and Passage Level Reading in a Foreign Language

    Science.gov (United States)

    Yamashita, Junko

    2013-01-01

    Despite the growing number of studies highlighting the complex process of acquiring second language (L2) word recognition skills, comparatively little research has examined the relationship between word recognition and passage-level reading ability in L2 learners; further, the existing results are inconclusive. This study aims to help fill the…

  19. The impact of task demand on visual word recognition.

    Science.gov (United States)

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Orthographic consistency affects spoken word recognition at different grain-sizes

    DEFF Research Database (Denmark)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previo...

  1. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400ms after the spoken word), whereas, the syllable mismatched words elicited an earlier and stronger N400 than the three partial mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. We interpret the differences in spoken word

  2. Voice reinstatement modulates neural indices of continuous word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Backer, Kristina C; Alain, Claude

    2014-09-01

    The present study was designed to examine listeners' ability to use voice information incidentally during spoken word recognition. We recorded event-related brain potentials (ERPs) during a continuous recognition paradigm in which participants indicated on each trial whether the spoken word was "new" or "old." Old items were presented at 2, 8 or 16 words following the first presentation. Context congruency was manipulated by having the same word repeated by either the same speaker or a different speaker. The different speaker could share the gender, accent or neither feature with the word presented the first time. Participants' accuracy was greatest when the old word was spoken by the same speaker than by a different speaker. In addition, accuracy decreased with increasing lag. The correct identification of old words was accompanied by an enhanced late positivity over parietal sites, with no difference found between voice congruency conditions. In contrast, an earlier voice reinstatement effect was observed over frontal sites, an index of priming that preceded recollection in this task. Our results provide further evidence that acoustic and semantic information are integrated into a unified trace and that acoustic information facilitates spoken word recollection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Specifying theories of developmental dyslexia: a diffusion model analysis of word recognition

    NARCIS (Netherlands)

    Zeguers, M.H.T.; Snellings, P.; Tijms, J.; Weeda, W.D.; Tamboer, P.; Bexkens, A.; Huizenga, H.M.

    2011-01-01

    The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and

  4. English word frequency and recognition in bilinguals: Inter-corpus comparison and error analysis.

    Science.gov (United States)

    Shi, Lu-Feng

    2015-01-01

    This study is the second of a two-part investigation on lexical effects on bilinguals' performance on a clinical English word recognition test. Focus is on word-frequency effects using counts provided by four corpora. Frequency of occurrence was obtained for 200 NU-6 words from the Hoosier mental lexicon (HML) and three contemporary corpora, American National Corpora, Hyperspace analogue to language (HAL), and SUBTLEX(US). Correlation analysis was performed between word frequency and error rate. Ten monolinguals and 30 bilinguals participated. Bilinguals were further grouped according to their age of English acquisition and length of schooling/working in English. Word frequency significantly affected word recognition in bilinguals who acquired English late and had limited schooling/working in English. When making errors, bilinguals tended to replace the target word with a word of a higher frequency. Overall, the newer corpora outperformed the HML in predicting error rate. Frequency counts provided by contemporary corpora predict bilinguals' recognition of English monosyllabic words. Word frequency also helps explain top replacement words for misrecognized targets. Word-frequency effects are especially prominent for bilinguals foreign born and educated.
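
    The analysis described above is essentially a correlation between (log) corpus frequency and per-word error rate. A minimal sketch of that kind of computation is given below; the word counts and error values are invented for illustration and are not the NU-6 items or corpus counts from the study.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical data: raw corpus counts and per-word error rates
        # (illustrative values only, not the study's materials).
        freq_counts = np.array([1520, 88, 4032, 310, 12, 970, 45, 2210])
        error_rates = np.array([0.05, 0.30, 0.02, 0.18, 0.42, 0.08, 0.35, 0.04])

        # Log-transform frequency, a common step before relating counts to behaviour.
        log_freq = np.log10(freq_counts + 1)

        rho, p = spearmanr(log_freq, error_rates)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # a negative rho means rarer words draw more errors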

  5. Braille in the Sighted: Teaching Tactile Reading to Sighted Adults.

    Science.gov (United States)

    Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Hańczur, Paweł; Szwed, Marcin

    2016-01-01

    Blind people are known to have superior perceptual abilities in their remaining senses. Several studies suggest that these enhancements are dependent on the specific experience of blind individuals, who use those remaining senses more than sighted subjects. In line with this view, sighted subjects, when trained, are able to significantly progress in relatively simple tactile tasks. However, the case of complex tactile tasks is less obvious, as some studies suggest that visual deprivation itself could confer large advantages in learning them. It remains unclear to what extent those complex skills, such as braille reading, can be learnt by sighted subjects. Here we enrolled twenty-nine sighted adults, mostly braille teachers and educators, in a 9-month braille reading course. At the beginning of the course, all subjects were naive in tactile braille reading. After the course, almost all were able to read whole braille words at a mean speed of 6 words-per-minute. Subjects with low tactile acuity did not differ significantly in braille reading speed from the rest of the group, indicating that low tactile acuity is not a limiting factor for learning braille, at least at this early stage of learning. Our study shows that most sighted adults can learn whole-word braille reading, given the right method and a considerable amount of motivation. The adult sensorimotor system can thus adapt, to some level, to very complex tactile tasks without visual deprivation. The pace of learning in our group was comparable to congenitally and early blind children learning braille in primary school, which suggests that the blind's mastery of complex tactile tasks can, to a large extent, be explained by experience-dependent mechanisms.

  6. The what, when, where, and how of visual word recognition.

    Science.gov (United States)

    Carreiras, Manuel; Armstrong, Blair C; Perea, Manuel; Frost, Ram

    2014-02-01

    A long-standing debate in reading research is whether printed words are perceived in a feedforward manner on the basis of orthographic information, with other representations such as semantics and phonology activated subsequently, or whether the system is fully interactive and feedback from these representations shapes early visual word recognition. We review recent evidence from behavioral, functional magnetic resonance imaging, electroencephalography, magnetoencephalography, and biologically plausible connectionist modeling approaches, focusing on how each approach provides insight into the temporal flow of information in the lexical system. We conclude that, consistent with interactive accounts, higher-order linguistic representations modulate early orthographic processing. We also discuss how biologically plausible interactive frameworks and coordinated empirical and computational work can advance theories of visual word recognition and other domains (e.g., object recognition). Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. COGNITIVE ANALYSIS OF THE READING IN THE PROCESS RECOGNITION OF WORDS

    Directory of Open Access Journals (Sweden)

    Jussara Oliveira Araújo

    2016-07-01

    Full Text Available Reading is a difficult activity to develop, demanding extensive learning. From this perspective, the objective is to describe and analyze word recognition abilities through the Model of Word Recognition proposed by Ellis (1995). The results could contribute to a more efficient pedagogical practice in the formation of reading competence.

  8. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    Science.gov (United States)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  9. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    Science.gov (United States)

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.

  10. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.

  11. Storage and retrieval properties of dual codes for pictures and words in recognition memory.

    Science.gov (United States)

    Snodgrass, J G; McClure, P

    1975-09-01

    Storage and retrieval properties of pictures and words were studied within a recognition memory paradigm. Storage was manipulated by instructing subjects either to image or to verbalize to both picture and word stimuli during the study sequence. Retrieval was manipulated by representing a proportion of the old picture and word items in their opposite form during the recognition test (i.e., some old pictures were tested with their corresponding words and vice versa). Recognition performance for pictures was identical under the two instructional conditions, whereas recognition performance for words was markedly superior under the imagery instruction condition. It was suggested that subjects may engage in dual coding of simple pictures naturally, regardless of instructions, whereas dual coding of words may occur only under imagery instructions. The form of the test item had no effect on recognition performance for either type of stimulus and under either instructional condition. However, change of form of the test item markedly reduced item-by-item correlations between the two instructional conditions. It is tentatively proposed that retrieval is required in recognition, but that the effect of a form change is simply to make the retrieval process less consistent, not less efficient.

  12. Interference of spoken word recognition through phonological priming from visual objects and printed words

    OpenAIRE

    McQueen, J.; Huettig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g...

  13. Allophones, not phonemes in spoken-word recognition

    NARCIS (Netherlands)

    Mitterer, H.A.; Reinisch, E.; McQueen, J.M.

    2018-01-01

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic

  14. Impaired Word and Face Recognition in Older Adults with Type 2 Diabetes.

    Science.gov (United States)

    Jones, Nicola; Riby, Leigh M; Smith, Michael A

    2016-07-01

    Older adults with type 2 diabetes mellitus (DM2) exhibit accelerated decline in some domains of cognition including verbal episodic memory. Few studies have investigated the influence of DM2 status in older adults on recognition memory for more complex stimuli such as faces. In the present study we sought to compare recognition memory performance for words, objects and faces under conditions of relatively low and high cognitive load. Healthy older adults with good glucoregulatory control (n = 13) and older adults with DM2 (n = 24) were administered recognition memory tasks in which stimuli (faces, objects and words) were presented under conditions of either i) low (stimulus presented without a background pattern) or ii) high (stimulus presented against a background pattern) cognitive load. In a subsequent recognition phase, the DM2 group recognized fewer faces than healthy controls. Further, the DM2 group exhibited word recognition deficits in the low cognitive load condition. The recognition memory impairment observed in patients with DM2 has clear implications for day-to-day functioning. Although these deficits were not amplified under conditions of increased cognitive load, the present study emphasizes that recognition memory impairment for both words and more complex stimuli such as face are a feature of DM2 in older adults. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.

  15. FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes the FPGA-based implementation of a Lithuanian isolated word recognition algorithm. The FPGA is selected for parallel implementation in VHDL to ensure fast signal processing at a low clock rate. Cepstrum analysis was applied to feature extraction from the voice signal. The dynamic time warping (DTW) algorithm was used to compare the vectors of cepstrum coefficients. A library of features for 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent recordings demonstrated a recognition rate of 94%; a recognition rate of 58% was achieved for speaker-independent recordings. Calculation of cepstrum coefficients took 8.52 ms at a 50 MHz clock, while 100 DTW comparisons took 66.56 ms at a 25 MHz clock. Article in Lithuanian
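
    The comparison step named above, dynamic time warping over sequences of cepstrum coefficient vectors, can be sketched in plain Python as below. This is a software illustration of the general algorithm only; the paper's FPGA/VHDL implementation, fixed-point formats, feature dimensions, and word list are not reproduced, and the template shapes and word labels here are placeholders.

        import numpy as np

        def dtw_distance(seq_a, seq_b):
            """Dynamic time warping distance between two sequences of feature
            vectors, e.g. cepstral coefficients with shape (frames, coeffs)."""
            n, m = len(seq_a), len(seq_b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local frame distance
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return float(cost[n, m])

        # Recognition = nearest stored template (shapes and labels are illustrative).
        library = {word: np.random.rand(40, 12) for word in ["word_a", "word_b", "word_c"]}
        utterance = np.random.rand(45, 12)
        best_word = min(library, key=lambda w: dtw_distance(utterance, library[w]))
        print(best_word)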

  16. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    Science.gov (United States)

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2012-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…

  17. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    Science.gov (United States)

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  18. SUGGESTIONS FOR DEVELOPING INDEPENDENT WORD ATTACK IN READING, FOR USE IN BASIC INSTITUTE MEETINGS, GRADES THREE AND FOUR.

    Science.gov (United States)

    REECE, THOMAS E.; AND OTHERS

    A guide for planning specific instruction for developing independent word attack presents the skills necessary for mastering sight vocabulary, word recognition, and the use of the dictionary. Specific definitions of terms and examples of teaching techniques with the sequence of instruction for the development of phonetic and structural analysis…

  19. Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.

    Science.gov (United States)

    Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia

    2018-02-12

    Words that correspond to a potential sensory experience (concrete words) have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words (context availability, emotional valence, and arousal) but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks (lexical decision, progressive demasking, and word naming) using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2: 306, 2011). The norms can be downloaded as supplementary material provided with this article.

  20. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    OpenAIRE

    Jesse, A.; McQueen, J.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker...

  1. How older adults use cognition in sentence-final word recognition.

    Science.gov (United States)

    Cahana-Amitay, Dalia; Spiro, Avron; Sayers, Jesse T; Oveis, Abigail C; Higby, Eve; Ojo, Emmanuel A; Duncan, Susan; Goral, Mira; Hyun, Jungmoon; Albert, Martin L; Obler, Loraine K

    2016-07-01

    This study examined the effects of executive control and working memory on older adults' sentence-final word recognition. The question we addressed was the importance of executive functions to this process and how it is modulated by the predictability of the speech material. To this end, we tested 173 neurologically intact adult native English speakers aged 55-84 years. Participants were given a sentence-final word recognition test in which sentential context was manipulated and sentences were presented in different levels of babble, and multiple tests of executive functioning assessing inhibition, shifting, and efficient access to long-term memory, as well as working memory. Using a generalized linear mixed model, we found that better inhibition was associated with higher accuracy in word recognition, while increased age and greater hearing loss were associated with poorer performance. Findings are discussed in the framework of semantic control and are interpreted as supporting a theoretical view of executive control which emphasizes functional diversity among executive components.

  2. Procedural Adaptations for Use of Constant Time Delay to Teach Highly Motivating Words to Beginning Braille Readers

    Science.gov (United States)

    Ivy, Sarah E.; Guerra, Jennifer A.; Hatton, Deborah D.

    2017-01-01

    Introduction: Constant time delay is an evidence-based practice to teach sight word recognition to students with a variety of disabilities. To date, two studies have documented its effectiveness for teaching braille. Methods: Using a multiple-baseline design, we evaluated the effectiveness of constant time delay to teach highly motivating words to…

  3. Congruent bodily arousal promotes the constructive recognition of emotional words.

    Science.gov (United States)

    Kever, Anne; Grynberg, Delphine; Vermeulen, Nicolas

    2017-08-01

    Considerable research has shown that bodily states shape affect and cognition. Here, we examined whether transient states of bodily arousal influence the categorization speed of high arousal, low arousal, and neutral words. Participants completed two blocks of a constructive recognition task, once after a cycling session (increased arousal) and once after a relaxation session (reduced arousal). Results revealed overall faster response times for high arousal compared to low arousal words, and for positive compared to negative words. Importantly, low arousal words were categorized significantly faster after the relaxation session than after the cycling session, suggesting that a decrease in bodily arousal promotes the recognition of stimuli matching one's current arousal state. These findings highlight the importance of the arousal dimension in emotional processing, and suggest the presence of arousal-congruency effects. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders

    DEFF Research Database (Denmark)

    Robotham, Ro J.; Starrfelt, Randi

    2017-01-01

    Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been… also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can…

  5. Chinese Unknown Word Recognition for PCFG-LA Parsing

    Directory of Open Access Journals (Sweden)

    Qiuping Huang

    2014-01-01

    Full Text Available This paper investigates the recognition of unknown words in Chinese parsing. Two methods are proposed to handle this problem. One is the modification of a character-based model: we model the emission probability of an unknown word using the first and last characters in the word. It aims to reduce the POS tag ambiguities of unknown words to improve the parsing performance. In addition, a novel method using graph-based semisupervised learning (SSL) is proposed to improve the syntax parsing of unknown words. Its goal is to discover additional lexical knowledge from a large amount of unlabeled data to help the syntax parsing. The method mainly propagates lexical emission probabilities to unknown words by building similarity graphs over the words of labeled and unlabeled data. The derived distributions are incorporated into the parsing process. The proposed methods are effective in dealing with unknown words and improve the parsing. Empirical results for the Penn Chinese Treebank and the TCT Treebank revealed their effectiveness.
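
    One way to read the character-based idea, approximating the POS emission probability of an out-of-vocabulary word from statistics of its first and last characters, is sketched below. The helper names, smoothing, and toy lexicon are assumptions for illustration, not the authors' implementation or treebank data.

        from collections import defaultdict

        def build_char_pos_stats(lexicon):
            """lexicon: iterable of (word, pos) pairs from labeled data.
            Returns P(pos | first_char) and P(pos | last_char) tables."""
            first = defaultdict(lambda: defaultdict(int))
            last = defaultdict(lambda: defaultdict(int))
            for word, pos in lexicon:
                first[word[0]][pos] += 1
                last[word[-1]][pos] += 1
            norm = lambda tab: {c: {p: n / sum(d.values()) for p, n in d.items()}
                                for c, d in tab.items()}
            return norm(first), norm(last)

        def unknown_word_pos_scores(word, first_tab, last_tab, smoothing=1e-6):
            """Score candidate POS tags for an unseen word by combining the
            distributions conditioned on its first and last characters."""
            f = first_tab.get(word[0], {})
            l = last_tab.get(word[-1], {})
            return {t: f.get(t, smoothing) * l.get(t, smoothing) for t in set(f) | set(l)}

        # Toy example (made-up pairs, not treebank entries).
        lexicon = [("老师", "NN"), ("老板", "NN"), ("学习", "VV"), ("学生", "NN"), ("打印", "VV")]
        first_tab, last_tab = build_char_pos_stats(lexicon)
        print(unknown_word_pos_scores("老外", first_tab, last_tab))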

  6. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information at the partial-phonological overlap was manipulated; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that the phonological competitor effects were observed at both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  7. Clustering of Farsi sub-word images for whole-book recognition

    Science.gov (United States)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-01-01

    Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm for measuring the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on a book. Here we present extended experimental results and evaluate our method on another book with a totally different font face. We also show that the number of newly created clusters in a page can be used as a criterion for assessing the quality of print and evaluating preprocessing phases.
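
    The clustering step can be pictured as a simple leader-style grouping of sub-word images under a pairwise distance, roughly as sketched below. The distance function here is a crude pixel-difference placeholder for the paper's image-matching algorithm and shape features, and the threshold is an arbitrary assumed value.

        import numpy as np

        def subword_distance(img_a, img_b):
            """Placeholder distance: mean absolute pixel difference after
            zero-padding both images to a common size."""
            h = max(img_a.shape[0], img_b.shape[0])
            w = max(img_a.shape[1], img_b.shape[1])
            pad = lambda im: np.pad(im, ((0, h - im.shape[0]), (0, w - im.shape[1])))
            return float(np.mean(np.abs(pad(img_a) - pad(img_b))))

        def cluster_subwords(images, threshold=0.15):
            """Assign each sub-word image to the first cluster whose representative
            is within `threshold`; otherwise start a new cluster."""
            clusters = []  # list of (representative_image, member_indices)
            for idx, img in enumerate(images):
                for rep, members in clusters:
                    if subword_distance(img, rep) < threshold:
                        members.append(idx)
                        break
                else:
                    clusters.append((img, [idx]))
            return clusters

        # Toy binary "images" standing in for segmented sub-words.
        rng = np.random.default_rng(0)
        imgs = [rng.integers(0, 2, (20, 30)).astype(float) for _ in range(10)]
        print(len(cluster_subwords(imgs)), "clusters")

    The per-page cluster count produced by a procedure like this is what the abstract proposes as a criterion for print quality.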

  8. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    Science.gov (United States)

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low

  9. Toddlers' sensitivity to within-word coarticulation during spoken word recognition: Developmental differences in lexical competition.

    Science.gov (United States)

    Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix

    2016-12-01

    To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. The word-frequency paradox for recall/recognition occurs for pictures.

    Science.gov (United States)

    Karlsen, Paul Johan; Snodgrass, Joan Gay

    2004-08-01

    A yes-no recognition task and two recall tasks were conducted using pictures of high and low familiarity ratings. Picture familiarity had analogous effects to word frequency, and replicated the word-frequency paradox in recall and recognition. Low-familiarity pictures were more recognizable than high-familiarity pictures, pure lists of high-familiarity pictures were more recallable than pure lists of low-familiarity pictures, and there was no effect of familiarity for mixed lists. These results are consistent with the predictions of the Search of Associative Memory (SAM) model.

  11. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  12. THE INFLUENCE OF SYLLABIFICATION RULES IN L1 ON L2 WORD RECOGNITION.

    Science.gov (United States)

    Choi, Wonil; Nam, Kichun; Lee, Yoonhyoung

    2015-10-01

    Experiments with Korean learners of English and English monolinguals were conducted to examine whether knowledge of syllabification in the native language (Korean) affects the recognition of printed words in the non-native language (English). Another purpose of this study was to test whether syllables are the processing unit in Korean visual word recognition. In Experiment 1, 26 native Korean speakers and 19 native English speakers participated. In Experiment 2, 40 native Korean speakers participated. In two experiments, syllable length was manipulated based on the Korean syllabification rule and the participants performed a lexical decision task. Analyses of variance were performed for the lexical decision latencies and error rates in two experiments. The results from Korean learners of English showed that two-syllable words based on the Korean syllabification rule were recognized faster as words than various types of three-syllable words, suggesting that Korean learners of English exploited their L1 phonological knowledge in recognizing English words. The results of the current study also support the idea that syllables are a processing unit of Korean visual word recognition.

  13. RECOGNITION METHOD FOR CURSIVE JAPANESE WORD WRITTEN IN LATIN CHARACTERS

    NARCIS (Netherlands)

    Maruyama, K.; Nakano, Y.

    2004-01-01

    This paper proposes a recognition method for cursive Japanese words written in Latin characters. The method integrates multiple classifiers using duplicated candidates in multiple classifiers and orders of classifiers to improve the word recognition rate by combining their results. In experiments

  14. Consonant/vowel asymmetry in early word form recognition.

    Science.gov (United States)

    Poltrock, Silvana; Nazzi, Thierry

    2015-03-01

    Previous preferential listening studies suggest that 11-month-olds' early word representations are phonologically detailed, such that minor phonetic variations (i.e., mispronunciations) impair recognition. However, these studies focused on infants' sensitivity to mispronunciations (or omissions) of consonants, which have been proposed to be more important for lexical identity than vowels. Even though a lexically related consonant advantage has been consistently found in French from 14 months of age onward, little is known about its developmental onset. The current study asked whether French-learning 11-month-olds exhibit a consonant-vowel asymmetry when recognizing familiar words, which would be reflected in vowel mispronunciations being more tolerated than consonant mispronunciations. In a baseline experiment (Experiment 1), infants preferred listening to familiar words over nonwords, confirming that at 11 months of age infants show a familiarity effect rather than a novelty effect. In Experiment 2, which was constructed using the familiar words of Experiment 1, infants preferred listening to one-feature vowel mispronunciations over one-feature consonant mispronunciations. Given the familiarity preference established in Experiment 1, this pattern of results suggests that recognition of early familiar words is more dependent on their consonants than on their vowels. This adds another piece of evidence that, at least in French, consonants already have a privileged role in lexical processing by 11 months of age, as claimed by Nespor, Peña, and Mehler (2003). Copyright © 2014 Elsevier Inc. All rights reserved.

  15. An ERP assessment of hemispheric projections in foveal and extrafoveal word recognition.

    Directory of Open Access Journals (Sweden)

    Timothy R Jordan

    Full Text Available BACKGROUND: The existence and function of unilateral hemispheric projections within foveal vision may substantially affect foveal word recognition. The purpose of this research was to reveal these projections and determine their functionality. METHODOLOGY: Single words (and pseudowords) were presented to the left or right of fixation, entirely within either foveal or extrafoveal vision. To maximize the likelihood of unilateral projections for foveal displays, stimuli in foveal vision were presented away from the midline. The processing of stimuli in each location was assessed by combining behavioural measures (reaction times, accuracy) with on-line monitoring of hemispheric activity using event-related potentials recorded over each hemisphere, and carefully-controlled presentation procedures using an eye-tracker linked to a fixation-contingent display. PRINCIPAL FINDINGS: Event-related potentials 100-150 ms and 150-200 ms after stimulus onset indicated that stimuli in extrafoveal and foveal locations were projected unilaterally to the hemisphere contralateral to the presentation hemifield with no concurrent projection to the ipsilateral hemisphere. These effects were similar for words and pseudowords, suggesting this early division occurred before word recognition. Indeed, event-related potentials revealed differences between words and pseudowords 300-350 ms after stimulus onset, for foveal and extrafoveal locations, indicating that word recognition had now occurred. However, these later event-related potentials also revealed that the hemispheric division observed previously was no longer present for foveal locations but remained for extrafoveal locations. These findings closely matched the behavioural finding that foveal locations produced similar performance each side of fixation but extrafoveal locations produced left-right asymmetries. CONCLUSIONS: These findings indicate that an initial division in unilateral hemispheric projections occurs in

  16. An ERP Assessment of Hemispheric Projections in Foveal and Extrafoveal Word Recognition

    Science.gov (United States)

    Jordan, Timothy R.; Fuggetta, Giorgio; Paterson, Kevin B.; Kurtev, Stoyan; Xu, Mengyun

    2011-01-01

    Background The existence and function of unilateral hemispheric projections within foveal vision may substantially affect foveal word recognition. The purpose of this research was to reveal these projections and determine their functionality. Methodology Single words (and pseudowords) were presented to the left or right of fixation, entirely within either foveal or extrafoveal vision. To maximize the likelihood of unilateral projections for foveal displays, stimuli in foveal vision were presented away from the midline. The processing of stimuli in each location was assessed by combining behavioural measures (reaction times, accuracy) with on-line monitoring of hemispheric activity using event-related potentials recorded over each hemisphere, and carefully-controlled presentation procedures using an eye-tracker linked to a fixation-contingent display. Principal Findings Event-related potentials 100–150 ms and 150–200 ms after stimulus onset indicated that stimuli in extrafoveal and foveal locations were projected unilaterally to the hemisphere contralateral to the presentation hemifield with no concurrent projection to the ipsilateral hemisphere. These effects were similar for words and pseudowords, suggesting this early division occurred before word recognition. Indeed, event-related potentials revealed differences between words and pseudowords 300–350 ms after stimulus onset, for foveal and extrafoveal locations, indicating that word recognition had now occurred. However, these later event-related potentials also revealed that the hemispheric division observed previously was no longer present for foveal locations but remained for extrafoveal locations. These findings closely matched the behavioural finding that foveal locations produced similar performance each side of fixation but extrafoveal locations produced left-right asymmetries. Conclusions These findings indicate that an initial division in unilateral hemispheric projections occurs in foveal vision

  17. Age of Acquisition and Sensitivity to Gender in Spanish Word Recognition

    Science.gov (United States)

    Foote, Rebecca

    2014-01-01

    Speakers of gender-agreement languages use gender-marked elements of the noun phrase in spoken-word recognition: A congruent marking on a determiner or adjective facilitates the recognition of a subsequent noun, while an incongruent marking inhibits its recognition. However, while monolinguals and early language learners evidence this…

  18. The Influence of Phonotactic Probability on Word Recognition in Toddlers

    Science.gov (United States)

    MacRoy-Higgins, Michelle; Shafer, Valerie L.; Schwartz, Richard G.; Marton, Klara

    2014-01-01

    This study examined the influence of phonotactic probability on word recognition in English-speaking toddlers. Typically developing toddlers completed a preferential looking paradigm using familiar words, which consisted of either high or low phonotactic probability sound sequences. The participants' looking behavior was recorded in response to…

  19. Deep generative learning of location-invariant visual word recognition.

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words-which was the model's learning objective
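
    A compact sketch of the general pipeline, greedy unsupervised pre-training of several hidden layers followed by a linear read-out of word identity from the deepest layer, is given below using scikit-learn restricted Boltzmann machines. The alphabet, word set, layer sizes, and position coding are toy assumptions; the specific deep generative model and stimuli from the study are not reproduced.

        import numpy as np
        from sklearn.neural_network import BernoulliRBM
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)

        # Toy data: 4-letter "words" over a 10-letter alphabet, one-hot coded into
        # 8 letter slots, shown at one of 5 possible retinal positions.
        WORDS = [tuple(rng.integers(0, 10, 4)) for _ in range(20)]

        def encode(word, position):
            x = np.zeros((8, 10))
            for i, letter in enumerate(word):
                x[position + i, letter] = 1.0
            return x.ravel()

        X, y = [], []
        for label, word in enumerate(WORDS):
            for pos in range(5):
                X.append(encode(word, pos))
                y.append(label)
        X, y = np.array(X), np.array(y)

        # Greedy layer-wise unsupervised training; word labels are never used here.
        hidden = X
        for n_hidden in (60, 40, 30):
            rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                               n_iter=20, random_state=0)
            hidden = rbm.fit_transform(hidden)

        # Linear decoding of word identity from the deepest hidden layer.
        clf = LogisticRegression(max_iter=1000).fit(hidden, y)
        print("decoding accuracy:", clf.score(hidden, y))

    Note that this sketch scores the read-out on the training items; a closer analogue of the location-invariance test would fit the read-out on some retinal positions and decode words presented at held-out positions.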

  20. Semantic Ambiguity Effects in L2 Word Recognition.

    Science.gov (United States)

    Ishida, Tomomi

    2018-06-01

    The present study examined the ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words on lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a letter string on a computer screen was an English word or not. An ambiguity advantage was found for both groups and greater ambiguity effects were found for the non-native speaker group when compared to the native speaker group. The findings imply that the larger ambiguity advantage for L2 processing is due to their slower response time in producing adequate feedback activation from the semantic level to the orthographic level.

  1. Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.

    Science.gov (United States)

    Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric

    2013-01-04

    It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.

  2. Concurrent Correlates of Chinese Word Recognition in Deaf and Hard-of-Hearing Children

    Science.gov (United States)

    Ching, Boby Ho-Hong; Nunes, Terezinha

    2015-01-01

    The aim of this study was to explore the relative contributions of phonological, semantic radical, and morphological awareness to Chinese word recognition in deaf and hard-of-hearing (DHH) children. Measures of word recognition, general intelligence, phonological, semantic radical, and morphological awareness were administered to 32 DHH and 35…

  3. Phonological processing skills in 6 year old blind and sighted Persian speakers

    Directory of Open Access Journals (Sweden)

    Maryam Sadat Momen Vaghefi

    2013-03-01

    Full Text Available Background and Aim: Phonological processing skills include the abilities to restore, retrieve and use memorized phonological codes. The purpose of this research is to compare and evaluate phonological processing skills in 6-7 year old blind and sighted Persian speakers in Tehran, Iran. Methods: This research is an analysis-comparison study. The subjects were 24 blind and 24 sighted children. The evaluation test of reading and writing disorders in primary school students, linguistic and cognitive abilities test, and the naming subtest of the aphasia evaluation test were used as research tools. Results: Sighted children were found to perform better on phoneme recognition of nonwords and flower naming subtests; and the difference was significant (p<0.001). Blind children performed better in words and sentence memory; the difference was significant (p<0.001). There were no significant differences in other subtests. Conclusion: Blind children's better performance in memory tasks is due to the fact that they have powerful auditory memory.

  4. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    Science.gov (United States)

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  5. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Science.gov (United States)

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm for frame synchronization words is proposed to recognize frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is derived in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on hard decisions, using soft decisions can improve the accuracy of blind recognition. Therefore, combining the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other modulation formats. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words, and the improved algorithm noticeably enhances the accuracy of blind recognition.
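
    The hard- versus soft-decision contrast can be illustrated with a toy sliding-correlation search for a known synchronization pattern in a noisy symbol stream, as sketched below. This is a schematic example only; the paper's recognition statistics, QPSK handling, and decision thresholds are not reproduced, and the sync word, noise level, and offset are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        sync = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1])   # candidate sync word
        frame = rng.integers(0, 2, 200)                          # random payload bits
        frame[60:60 + len(sync)] = sync                          # embed sync word at offset 60

        # Noisy soft symbols: +1 for bit 1, -1 for bit 0, plus Gaussian noise.
        soft = (2 * frame - 1) + rng.normal(0.0, 0.8, frame.size)
        hard = (soft > 0).astype(int)                            # hard decisions

        ref = 2 * sync - 1

        def correlate(seq):
            """Sliding correlation of a symbol sequence with the sync reference."""
            return np.array([np.dot(seq[i:i + len(ref)], ref)
                             for i in range(len(seq) - len(ref) + 1)])

        hard_corr = correlate(2 * hard - 1)   # correlation over hard decisions
        soft_corr = correlate(soft)           # correlation over soft values

        print("hard-decision peak at offset", int(np.argmax(hard_corr)))
        print("soft-decision peak at offset", int(np.argmax(soft_corr)))

    With enough noise, the soft-valued correlation tends to localize the sync word more reliably than the hard-decision version, which is the intuition behind the accuracy improvement reported above.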

  6. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  7. Accentuate or repeat? Brain signatures of developmental periods in infant word recognition.

    Science.gov (United States)

    Männel, Claudia; Friederici, Angela D

    2013-01-01

    Language acquisition has long been discussed as an interaction between biological preconditions and environmental input. This general interaction seems particularly salient in lexical acquisition, where infants are already able to detect unknown words in sentences at 7 months of age, guided by phonological and statistical information in the speech input. While this information results from the linguistic structure of a given language, infants also exploit situational information, such as speakers' additional word accentuation and word repetition. The current study investigated the developmental trajectory of infants' sensitivity to these two situational input cues in word recognition. Testing infants at 6, 9, and 12 months of age, we hypothesized that different age groups are differentially sensitive to accentuation and repetition. In a familiarization-test paradigm, event-related brain potentials (ERPs) revealed age-related differences in infants' word recognition as a function of situational input cues: at 6 months infants only recognized previously accentuated words, at 9 months both accentuation and repetition played a role, while at 12 months only repetition was effective. These developmental changes are suggested to result from infants' advancing linguistic experience and parallel auditory cortex maturation. Our data indicate very narrow and specific input-sensitive periods in infant word recognition, with accentuation being effective prior to repetition. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. The role of backward associative strength in false recognition of DRM lists with multiple critical words.

    Science.gov (United States)

    Beato, María S; Arndt, Jason

    2017-08-01

    Memory is a reconstruction of the past and is prone to errors. One of the most widely-used paradigms to examine false memory is the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words associatively related to a non-presented critical word. In a subsequent memory test, critical words are often falsely recalled and/or recognized. In the present study, we examined the influence of backward associative strength (BAS) on false recognition using DRM lists with multiple critical words. In forty-eight English DRM lists, we manipulated BAS while controlling forward associative strength (FAS). Lists included four words (e.g., prison, convict, suspect, fugitive) simultaneously associated with two critical words (e.g., CRIMINAL, JAIL). The results indicated that true recognition was similar in high-BAS and low-BAS lists, while false recognition was greater in high-BAS lists than in low-BAS lists. Furthermore, there was a positive correlation between false recognition and the probability of a resonant connection between the studied words and their associates. These findings suggest that BAS and resonant connections influence false recognition, and extend prior research using DRM lists associated with a single critical word to studies of DRM lists associated with multiple critical words.

  9. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words

    Science.gov (United States)

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Fitzgibbons, Peter J.; Cohen, Julie I.

    2015-01-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech. PMID:25698021

  10. Stimulus-independent semantic bias misdirects word recognition in older adults.

    Science.gov (United States)

    Rogers, Chad S; Wingfield, Arthur

    2015-07-01

    Older adults' normally adaptive use of semantic context to aid in word recognition can have a negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word-pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed older adults' greater tendency to misidentify words based on their semantic context compared to the young adults, and to do so with a higher level of confidence. This age difference was unaffected by differences in the relative level of acoustic masking.

  11. Deep generative learning of location-invariant visual word recognition

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words—which was the model's learning objective—is largely based on letter-level information.

  12. Deep generative learning of location-invariant visual word recognition

    Directory of Open Access Journals (Sweden)

    Maria Grazia eDi Bono

    2013-09-01

    Full Text Available It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Conversely, there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words – which was the model’s learning objective – is largely based on letter-level information.
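    The decoding analysis described in both records (reading out word identity from a hidden layer with a linear classifier, across retinal locations) can be sketched as follows. The generative network itself is not reproduced; the activations, labels, and locations are assumed to already be available as arrays, and all names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cross_location_decoding(activations, word_labels, locations, held_out_loc):
    """Train a linear decoder on words presented at every location except one,
    then test on the held-out location; high accuracy implies that the layer
    codes word identity in a location-invariant way."""
    train = locations != held_out_loc
    clf = LogisticRegression(max_iter=1000).fit(activations[train], word_labels[train])
    return clf.score(activations[~train], word_labels[~train])

# Toy usage with random activations (accuracy will sit near chance; activations
# from a trained network's deepest layer are what would push it toward ceiling).
rng = np.random.default_rng(1)
acts = rng.normal(size=(500, 50))       # 500 stimuli x 50 hidden units
words = rng.integers(0, 10, size=500)   # 10 word identities
locs = rng.integers(0, 5, size=500)     # 5 retinal locations
print(cross_location_decoding(acts, words, locs, held_out_loc=0))
```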

  13. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  14. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants.

    Science.gov (United States)

    Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur

    The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a

  15. An ERP investigation of visual word recognition in syllabary scripts.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2013-06-01

    The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in "Experiment 1: Within-script priming", in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.

  16. Connected word recognition using a cascaded neuro-computational model

    Science.gov (United States)

    Hoya, Tetsuya; van Leeuwen, Cees

    2016-10-01

    We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.

  17. An fMRI study of concreteness effects in spoken word recognition.

    Science.gov (United States)

    Roxbury, Tracy; McMahon, Katie; Copland, David A

    2014-09-30

    Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high imageability nouns, (b) abstract, low imageability nouns and (c) opaque legal pseudowords presented in a pseudorandomised, event-related design. Activation for the concrete, abstract and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings of concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than both abstract and pseudoword conditions, and the abstract condition was significantly faster than the pseudoword condition. Significant activity was also elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions that are activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.

  18. Phonological Awareness and Naming Speed in the Prediction of Dutch Children's Word Recognition

    Science.gov (United States)

    Verhagen, W.; Aarnoutse, C.; van Leeuwe, J.

    2008-01-01

    Influences of phonological awareness and naming speed on the speed and accuracy of Dutch children's word recognition were investigated in a longitudinal study. The speed and accuracy of word recognition at the ends of Grades 1 and 2 were predicted by naming speed from both the beginning and end of Grade 1, after control for autoregressive…

  19. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Directory of Open Access Journals (Sweden)

    Jiangyi Qin

    Full Text Available A novel blind recognition algorithm is proposed for identifying frame synchronization words and their parameters in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is derived in detail, and the criteria for parameter recognition are given. Compared with hard-decision blind recognition, soft decisions improve recognition accuracy; therefore, exploiting the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed. The improved algorithm can also be extended to other modulation formats. The complete recognition procedures of both the hard-decision and the soft-decision algorithms are then given in detail. Finally, simulation results show that both algorithms can blindly recognize the parameters of frame synchronization words, and that the improved algorithm clearly enhances recognition accuracy.

  20. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    Science.gov (United States)

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
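    The phi-square confusability measure used in this study is derived from perceptual confusion matrices and is not reproduced here; the sketch below only illustrates the more familiar neighborhood-style competition metrics the paper compares it against, computed over a toy lexicon with made-up frequencies.

```python
import math

def one_edit_apart(a, b):
    """True if b differs from a by exactly one substitution, insertion, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def competition_metrics(word, lexicon_freqs):
    """Two standard competition predictors: neighborhood density (number of
    one-edit neighbors) and the summed log frequency of those neighbors."""
    neighbors = [w for w in lexicon_freqs if w != word and one_edit_apart(word, w)]
    density = len(neighbors)
    weighted = sum(math.log(lexicon_freqs[w] + 1) for w in neighbors)
    return density, weighted

# Toy lexicon with made-up frequency counts.
lexicon = {"cat": 900, "cap": 300, "cut": 450, "bat": 200, "can": 800, "dog": 950}
print(competition_metrics("cat", lexicon))   # neighbors of "cat": cap, cut, bat, can
```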

  1. Levels-of-processing effect on word recognition in schizophrenia.

    Science.gov (United States)

    Ragland, J Daniel; Moelter, Stephen T; McGrath, Claire; Hill, S Kristian; Gur, Raquel E; Bilker, Warren B; Siegel, Steven J; Gur, Ruben C

    2003-12-01

    Individuals with schizophrenia have difficulty organizing words semantically to facilitate encoding. This is commonly attributed to organizational rather than semantic processing limitations. By requiring participants to classify and encode words on either a shallow (e.g., uppercase/lowercase) or deep level (e.g., concrete/abstract), the levels-of-processing paradigm eliminates the need to generate organizational strategies. This paradigm was administered to 30 patients with schizophrenia and 30 healthy comparison subjects to test whether providing a strategy would improve patient performance. Word classification during shallow and deep encoding was slower and less accurate in patients. Patients also responded slowly during recognition testing and maintained a more conservative response bias following deep encoding; however, both groups showed a robust levels-of-processing effect on recognition accuracy, with unimpaired patient performance following both shallow and deep encoding. This normal levels-of-processing effect in the patient sample suggests that semantic processing is sufficiently intact for patients to benefit from organizational cues. Memory remediation efforts may therefore be most successful if they focus on teaching patients to form organizational strategies during initial encoding.

  2. Surviving Blind Decomposition: A Distributional Analysis of the Time-Course of Complex Word Recognition

    Science.gov (United States)

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-01-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…

  3. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    Science.gov (United States)

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.
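    HSF and LSF primes of this kind are usually produced by spatial-frequency filtering of the stimulus image. The exact cutoffs used in the study are not stated here, so the following is only a generic sketch of a Gaussian low-/high-pass split in the Fourier domain; the cutoff value and function names are assumptions.

```python
import numpy as np

def spatial_frequency_split(image, cutoff_cycles):
    """Split a grayscale image into low- and high-spatial-frequency versions
    using a Gaussian filter in the Fourier domain (cutoff in cycles per image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                        # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w                        # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    lowpass = np.exp(-(radius ** 2) / (2.0 * cutoff_cycles ** 2))
    spectrum = np.fft.fft2(image)
    lsf = np.real(np.fft.ifft2(spectrum * lowpass))   # low-spatial-frequency image
    hsf = image - lsf                                 # the complement keeps the HSF content
    return lsf, hsf

# Toy usage on a random array standing in for a rendered word image.
img = np.random.default_rng(2).random((64, 256))
lsf, hsf = spatial_frequency_split(img, cutoff_cycles=8)
print(lsf.shape, hsf.shape)
```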

  4. Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition

    Science.gov (United States)

    Yap, Melvin J.; Balota, David A.

    2007-01-01

    Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…

  5. A connectionist model for the simulation of human spoken-word recognition

    NARCIS (Netherlands)

    Kuijk, D.J. van; Wittenburg, P.; Dijkstra, A.F.J.; Den Brinker, B.P.L.M.; Beek, P.J.; Brand, A.N.; Maarse, F.J.; Mulder, L.J.M.

    1999-01-01

    A new psycholinguistically motivated, neural-network-based model of human word recognition is presented. In contrast to earlier models, it uses real speech as input. At the word layer, acoustical and temporal information is stored by sequences of connected sensory neurons that pass on sensor

  6. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by the cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that the spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competition between multiple candidate words and lead to increased processing difficulties, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among the candidate words during spoken word recognition.

  8. Two-year-olds' sensitivity to subphonemic mismatch during online spoken word recognition.

    Science.gov (United States)

    Paquette-Smith, Melissa; Fecher, Natalie; Johnson, Elizabeth K

    2016-11-01

    Sensitivity to noncontrastive subphonemic detail plays an important role in adult speech processing, but little is known about children's use of this information during online word recognition. In two eye-tracking experiments, we investigate 2-year-olds' sensitivity to a specific type of subphonemic detail: coarticulatory mismatch. In Experiment 1, toddlers viewed images of familiar objects (e.g., a boat and a book) while hearing labels containing appropriate or inappropriate coarticulation. Inappropriate coarticulation was created by cross-splicing the coda of the target word onto the onset of another word that shared the same onset and nucleus (e.g., to create boat, the final consonant of boat was cross-spliced onto the initial CV of bone). We tested 24-month-olds and 29-month-olds in this paradigm. Both age groups behaved similarly, readily detecting the inappropriate coarticulation (i.e., showing better recognition of identity-spliced than cross-spliced items). In Experiment 2, we asked how children's sensitivity to subphonemic mismatch compared to their sensitivity to phonemic mismatch. Twenty-nine-month-olds were presented with targets that contained either a phonemic (e.g., the final consonant of boat was spliced onto the initial CV of bait) or a subphonemic mismatch (e.g., the final consonant of boat was spliced onto the initial CV of bone). Here, the subphonemic (coarticulatory) mismatch was not nearly as disruptive to children's word recognition as a phonemic mismatch. Taken together, our findings support the view that 2-year-olds, like adults, use subphonemic information to optimize online word recognition.

  9. Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.

    Science.gov (United States)

    Robotham, Ro J; Starrfelt, Randi

    2017-01-01

    Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.

  10. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-05-16

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanisms induced by the experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at an early stage of recognition (~150-250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300-500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements.

  11. The Predictive Power of Phonemic Awareness and Naming Speed for Early Dutch Word Recognition

    Science.gov (United States)

    Verhagen, Wim G. M.; Aarnoutse, Cor A. J.; van Leeuwe, Jan F. J.

    2009-01-01

    Effects of phonemic awareness and naming speed on the speed and accuracy of Dutch children's word recognition were investigated in a longitudinal study. Both the speed and accuracy of word recognition at the end of Grade 2 were predicted by naming speed from both kindergarten and Grade 1, after control for autoregressive relations, kindergarten…

  12. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  13. Cross-modal working memory binding and word recognition skills: how specific is the link?

    Science.gov (United States)

    Wang, Shinmin; Allen, Richard J

    2018-04-01

    Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.

  14. Interaction in Spoken Word Recognition Models: Feedback Helps

    Science.gov (United States)

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593

  15. Interaction in Spoken Word Recognition Models: Feedback Helps.

    Science.gov (United States)

    Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D

    2018-01-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  16. Interaction in Spoken Word Recognition Models: Feedback Helps

    Directory of Open Access Journals (Sweden)

    James S. Magnuson

    2018-04-01

    Full Text Available Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
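    TRACE itself is far richer than anything shown here, but the basic claim of these three records (top-down feedback from word units to lower-level units speeds recognition of a degraded input) can be illustrated with a toy interactive-activation loop. The lexicon, feature coding, parameters, and threshold below are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy lexicon: each word is a made-up binary feature pattern.
words = {"cat": [1, 0, 1, 0, 1, 0], "cap": [1, 0, 1, 0, 0, 1], "dog": [0, 1, 0, 1, 1, 0]}
names = list(words)
W = np.array([words[w] for w in names], dtype=float)       # word-to-feature weights

def steps_to_threshold(input_feat, feedback, threshold=1.5, max_steps=100, noise=0.2):
    """Toy interactive-activation loop: word units integrate noisy bottom-up
    support; with feedback > 0 they also push their expected features back
    onto the feature layer, partially restoring a degraded input. Returns the
    step at which some word unit first crosses the recognition threshold."""
    feat = np.array(input_feat, dtype=float)
    word_act = np.zeros(len(names))
    for step in range(1, max_steps + 1):
        bottom_up = W @ (feat + rng.normal(0.0, noise, size=feat.shape))
        word_act = 0.8 * word_act + 0.2 * np.maximum(bottom_up, 0.0)
        if feedback > 0:                                   # top-down sharpening
            feat = np.clip(feat + feedback * (W.T @ word_act - feat), 0.0, 1.0)
        if word_act.max() >= threshold:
            return step
    return max_steps

# A degraded (attenuated) input for "cat", as if the signal were faint or noisy.
degraded_cat = 0.6 * np.array(words["cat"], dtype=float)
trials = 200
t_no_fb = np.mean([steps_to_threshold(degraded_cat, feedback=0.0) for _ in range(trials)])
t_fb = np.mean([steps_to_threshold(degraded_cat, feedback=0.1) for _ in range(trials)])
print(f"mean steps to threshold: no feedback {t_no_fb:.1f}, with feedback {t_fb:.1f}")
```

    With feedback, the leading word unit amplifies its own evidence by restoring the attenuated features, so the threshold is typically reached in fewer steps; without feedback, word activation can only approach the level the degraded input supports.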

  17. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  18. The Influence of Orthographic Neighborhood Density and Word Frequency on Visual Word Recognition: Insights from RT Distributional Analyses

    Directory of Open Access Journals (Sweden)

    Stephen Wee Hun eLim

    2016-03-01

    Full Text Available The effects of orthographic neighborhood density and word frequency in visual word recognition were investigated using distributional analyses of response latencies in visual lexical decision. Main effects of density and frequency were observed in mean latencies. Distributional analyses, in addition, revealed a density x frequency interaction: for low-frequency words, density effects were mediated predominantly by distributional shifting whereas for high-frequency words, density effects were absent except at the slower RTs, implicating distributional skewing. The present findings suggest that density effects in low-frequency words reflect processes involved in early lexical access, while the effects observed in high-frequency words reflect late postlexical checking processes.
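    Distributional shifting versus skewing of this kind is commonly quantified by fitting an ex-Gaussian to each condition's response times: shifting shows up in the Gaussian mean (mu), skewing in the exponential tail (tau). A minimal sketch using scipy's exponnorm follows; the simulated values are made up and the paper's actual analysis pipeline may differ.

```python
import numpy as np
from scipy import stats

def ex_gaussian_params(rts):
    """Fit an ex-Gaussian and return (mu, sigma, tau): mu and sigma describe the
    Gaussian component (distributional shift), tau the exponential tail (skew)."""
    k, loc, scale = stats.exponnorm.fit(rts)
    return loc, scale, k * scale

# Simulated data mimicking the reported pattern (all values are made up):
# low-frequency words show a density effect as a shift (larger mu), while
# high-frequency words show it mainly in the slow tail (larger tau).
rng = np.random.default_rng(4)
def simulate(mu, sigma, tau, n=2000):
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

conditions = {
    "low freq, sparse":  simulate(650, 60, 120),
    "low freq, dense":   simulate(690, 60, 120),   # shifted
    "high freq, sparse": simulate(560, 50, 90),
    "high freq, dense":  simulate(560, 50, 130),   # skewed
}
for name, rts in conditions.items():
    mu, sigma, tau = ex_gaussian_params(rts)
    print(f"{name}: mu={mu:.0f}  sigma={sigma:.0f}  tau={tau:.0f}")
```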

  19. The effect of font size and type on reading performance with Arabic words in normally sighted and simulated cataract subjects.

    Science.gov (United States)

    Alotaibi, Abdullah Z

    2007-05-01

    Previous investigations have shown that reading is the most common functional problem reported by patients at a low vision practice. While there have been studies investigating the effect of fonts in normal and low vision patients in English, no such study has been carried out in Arabic. Additionally, there has been no investigation into the optimum print sizes or fonts that should be used in Arabic books and leaflets for low vision patients. Arabic sentences were read by 100 normally sighted volunteers with and without simulated cataract. Subjects read two font types (Times New Roman and Courier) in three different sizes (N8, N10 and N12). The subjects were asked to read the sentences aloud. Reading speed was calculated as the number of words read divided by the time taken, while reading rate was calculated as the number of words read correctly divided by the time taken. There was an improvement in reading performance of normally sighted and simulated visually impaired subjects when the print size increased. There was no significant difference in reading performance between the two types of font used at small print size; however, the reading rate improved as print size increased with Times New Roman. The results suggest that the use of N12 print in Times New Roman enhanced reading performance in normally sighted and simulated cataract subjects.

  20. Tracking the time course of word-frequency effects in auditory word recognition with event-related potentials.

    Science.gov (United States)

    Dufour, Sophie; Brunellière, Angèle; Frauenfelder, Ulrich H

    2013-04-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word frequency effect that is limited to a purely decisional locus after word identification has been completed. Copyright © 2012 Cognitive Science Society, Inc.

  1. Perception and recognition memory of words and werds: two-way mirror effects.

    Science.gov (United States)

    Becker, D Vaughn; Goldinger, Stephen D; Stone, Gregory O

    2006-10-01

    We examined associative priming of words (e.g., TOAD) and pseudohomophones of those words (e.g., TODE) in lexical decision. In addition to word frequency effects, reliable base-word frequency effects were observed for pseudohomophones: Those based on high-frequency words elicited faster and more accurate correct rejections. Associative priming had disparate effects on high- and low-frequency items. Whereas priming improved performance to high-frequency pseudohomophones, it impaired performance to low-frequency pseudohomophones. The results suggested a resonance process, wherein phonologic identity and semantic priming combine to undermine the veridical perception of infrequent items. We tested this hypothesis in another experiment by administering a surprise recognition memory test after lexical decision. When asked to identify words that were spelled correctly during lexical decision, the participants often misremembered pseudohomophones as correctly spelled items. Patterns of false memory, however, were jointly affected by base-word frequencies and their original responses during lexical decision. Taken together, the results are consistent with resonance accounts of word recognition, wherein bottom-up and top-down information sources coalesce into correct, and sometimes illusory, perception. The results are also consistent with a recent lexical decision model, REM-LD, that emphasizes memory retrieval and top-down matching processes in lexical decision.

  2. Severe difficulties with word recognition in noise after platinum chemotherapy in childhood, and improvements with open-fitting hearing-aids.

    Science.gov (United States)

    Einarsson, Einar-Jón; Petersen, Hannes; Wiebe, Thomas; Fransson, Per-Anders; Magnusson, Måns; Moëll, Christian

    2011-10-01

    To investigate word recognition in noise in subjects treated in childhood with chemotherapy, study benefits of open-fitting hearing-aids for word recognition, and investigate whether self-reported hearing-handicap corresponded to subjects' word recognition ability. Subjects diagnosed with cancer and treated with platinum-based chemotherapy in childhood underwent audiometric evaluations. Fifteen subjects (eight females and seven males) fulfilled the criteria set for the study, and four of those received customized open-fitting hearing-aids. Subjects with cisplatin-induced ototoxicity had severe difficulties recognizing words in noise, and scored as low as 54% below reference scores standardized for age and degree of hearing loss. Hearing-impaired subjects' self-reported hearing-handicap correlated significantly with word recognition in a quiet environment but not in noise. Word recognition in noise improved markedly (up to 46%) with hearing-aids, and the self-reported hearing-handicap and disability score were reduced by more than 50%. This study demonstrates the importance of testing word recognition in noise in subjects treated with platinum-based chemotherapy in childhood, and to use specific custom-made questionnaires to evaluate the experienced hearing-handicap. Open-fitting hearing-aids are a good alternative for subjects suffering from poor word recognition in noise.

  3. Functions of graphemic and phonemic codes in visual word-recognition.

    Science.gov (United States)

    Meyer, D E; Schvaneveldt, R W; Ruddy, M G

    1974-03-01

    Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically, and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.

  4. Electrophysiological assessment of the time course of bilingual visual word recognition: Early access to language membership.

    Science.gov (United States)

    Yiu, Loretta K; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2015-08-01

    Previous research examining the time course of lexical access during word recognition suggests that phonological processing precedes access to semantic information, which in turn precedes access to syntactic information. Bilingual word recognition likely requires an additional level: knowledge of which language a specific word belongs to. Using the recording of event-related potentials, we investigated the time course of access to language membership information relative to semantic (Experiment 1) and syntactic (Experiment 2) encoding during visual word recognition. In Experiment 1, Spanish-English bilinguals viewed a series of printed words while making dual-choice go/nogo and left/right hand decisions based on semantic (whether the word referred to an animal or an object) and language membership information (whether the word was in English or in Spanish). Experiment 2 used a similar paradigm but with syntactic information (whether the word was a noun or a verb) as one of the response contingencies. The onset and peak latency of the N200, a component related to response inhibition, indicated that language information is accessed earlier than semantic information. Similarly, language information was also accessed earlier than syntactic information (but only based on peak latency). We discuss these findings with respect to models of bilingual word recognition and language comprehension in general. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. The impact of left and right intracranial tumors on picture and word recognition memory.

    Science.gov (United States)

    Goldstein, Bram; Armstrong, Carol L; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V

    2004-02-01

    This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH patient group obtained a significantly slower mean picture recognition reaction time than the RH group. The LH group had a higher proportion of tumors extending into the temporal lobes, possibly accounting for their greater pictorial processing impairments. Dual coding and enhanced visual imagery may have contributed to the patient groups' similar performance on the remainder of the measures.

  6. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    NARCIS (Netherlands)

    Jesse, A.; McQueen, J.M.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes

  7. Morphing Images: A Potential Tool for Teaching Word Recognition to Children with Severe Learning Difficulties

    Science.gov (United States)

    Sheehy, Kieron

    2005-01-01

    Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…

  8. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  9. No strong evidence for lateralisation of word reading and face recognition deficits following posterior brain injury

    DEFF Research Database (Denmark)

    Gerlach, Christian; Marstrand, Lisbet; Starrfelt, Randi

    2014-01-01

    Face recognition and word reading are thought to be mediated by relatively independent cognitive systems lateralized to the right and left hemisphere, respectively. In this case, we should expect a higher incidence of face recognition problems in patients with right hemisphere injury and a higher …-construction, motion perception), we found that both patient groups performed significantly worse than a matched control group. In particular, we found a significant number of face recognition deficits in patients with left hemisphere injury and a significant number of patients with word reading deficits following … right hemisphere injury. This suggests that face recognition and word reading may be mediated by more bilaterally distributed neural systems than is commonly assumed…

  10. Distributional structure in language: contributions to noun-verb difficulty differences in infant word recognition.

    Science.gov (United States)

    Willits, Jon A; Seidenberg, Mark S; Saffran, Jenny R

    2014-09-01

    What makes some words easy for infants to recognize, and other words difficult? We addressed this issue in the context of prior results suggesting that infants have difficulty recognizing verbs relative to nouns. In this work, we highlight the role played by the distributional contexts in which nouns and verbs occur. Distributional statistics predict that English nouns should generally be easier to recognize than verbs in fluent speech. However, there are situations in which distributional statistics provide similar support for verbs. The statistics for verbs that occur with the English morpheme -ing, for example, should facilitate verb recognition. In two experiments with 7.5- and 9.5-month-old infants, we tested the importance of distributional statistics for word recognition by varying the frequency of the contextual frames in which verbs occur. The results support the conclusion that distributional statistics are utilized by infant language learners and contribute to noun-verb differences in word recognition. Copyright © 2014. Published by Elsevier B.V.

  11. Working memory affects older adults' use of context in spoken-word recognition.

    Science.gov (United States)

    Janse, Esther; Jesse, Alexandra

    2014-01-01

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  12. Distinguishing familiarity from fluency for the compound word pair effect in associative recognition.

    Science.gov (United States)

    Ahmad, Fahad N; Hockley, William E

    2017-09-01

    We examined whether processing fluency contributes to associative recognition of unitized pre-experimental associations. In Experiments 1A and 1B, we minimized perceptual fluency by presenting the two words of each pair on separate screens at both study and test, yet the compound word (CW) effect (i.e., hit and false-alarm rates greater for CW pairs, with no difference in discrimination) was not reduced. In Experiments 2A and 2B, conceptual fluency was examined by comparing transparent (e.g., hand bag) and opaque (e.g., rag time) CW pairs in lexical decision and associative recognition tasks. Lexical decision was faster for transparent CWs (Experiment 2A), but in associative recognition the CW effect did not differ by CW pair type (Experiment 2B). In Experiments 3A and 3B, we examined whether priming that increases processing fluency would influence the CW effect. In Experiment 3A, CW and non-compound word pairs were preceded by matched and mismatched primes at test in an associative recognition task. In Experiment 3B, only transparent and opaque CW pairs were presented. Results showed that presenting matched versus mismatched primes at test did not influence the CW effect. The CW effect in yes-no associative recognition is thus due to reliance on the enhanced familiarity of unitized CW pairs.

  13. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  15. Learning to Read Words: Theory, Findings, and Issues

    Science.gov (United States)

    Ehri, Linnea C.

    2005-01-01

    Reading words may take several forms. Readers may utilize decoding, analogizing, or predicting to read unfamiliar words. Readers read familiar words by accessing them in memory, called sight word reading. With practice, all words come to be read automatically by sight, which is the most efficient, unobtrusive way to read words in text. The process…

  16. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound … from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect…

  17. Modeling code-interactions in bilingual word recognition: Recent empirical studies and simulations with BIA+

    NARCIS (Netherlands)

    Lam, K.J.Y.; Dijkstra, A.F.J.

    2010-01-01

    Daily conversations contain many repetitions of identical and similar word forms. For bilinguals, the words can even come from the same or different languages. How do such repetitions affect the human word recognition system? The Bilingual Interactive Activation Plus (BIA+) model provides a

  18. Reading front to back: MEG evidence for early feedback effects during word recognition.

    Science.gov (United States)

    Woodhead, Z V J; Barnes, G R; Penny, W; Moran, R; Teki, S; Price, C J; Leff, A P

    2014-03-01

    Magnetoencephalography studies in humans have shown word-selective activity in the left inferior frontal gyrus (IFG) approximately 130 ms after word presentation ( Pammer et al. 2004; Cornelissen et al. 2009; Wheat et al. 2010). The role of this early frontal response is currently not known. We tested the hypothesis that the IFG provides top-down constraints on word recognition using dynamic causal modeling of magnetoencephalography data collected, while subjects viewed written words and false font stimuli. Subject-specific dipoles in left and right occipital, ventral occipitotemporal and frontal cortices were identified using Variational Bayesian Equivalent Current Dipole source reconstruction. A connectivity analysis tested how words and false font stimuli differentially modulated activity between these regions within the first 300 ms after stimulus presentation. We found that left inferior frontal activity showed stronger sensitivity to words than false font and a stronger feedback connection onto the left ventral occipitotemporal cortex (vOT) in the first 200 ms. Subsequently, the effect of words relative to false font was observed on feedforward connections from left occipital to ventral occipitotemporal and frontal regions. These findings demonstrate that left inferior frontal activity modulates vOT in the early stages of word processing and provides a mechanistic account of top-down effects during word recognition.

  19. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    Science.gov (United States)

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  20. Reevaluating split-fovea processing in word recognition: hemispheric dominance, retinal location, and the word-nonword effect.

    Science.gov (United States)

    Jordan, Timothy R; Paterson, Kevin B; Kurtev, Stoyan

    2009-03-01

    Many studies have claimed that hemispheric projections are split precisely at the foveal midline and so hemispheric asymmetry affects word recognition right up to the point of fixation. To investigate this claim, four-letter words and nonwords were presented to the left or right of fixation, either close to fixation in foveal vision or farther from fixation in extrafoveal vision. Presentation accuracy was controlled using an eyetracker linked to a fixation-contingent display. Words presented foveally produced identical performance on each side of fixation, but words presented extrafoveally showed a clear left-hemisphere (LH) advantage. Nonwords produced no evidence of hemispheric asymmetry in any location. Foveal stimuli also produced an identical word-nonword effect on each side of fixation, whereas extrafoveal stimuli produced a word-nonword effect only for LH (not right-hemisphere) displays. These findings indicate that functional unilateral projections to contralateral hemispheres exist in extrafoveal locations but provide no evidence of a functional division in hemispheric processing at fixation.

  1. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  2. Coordination of Word Recognition and Oculomotor Control During Reading: The Role of Implicit Lexical Decisions

    Science.gov (United States)

    Choi, Wonil; Gordon, Peter C.

    2013-01-01

    The coordination of word-recognition and oculomotor processes during reading was evaluated in two eye-tracking experiments that examined how word skipping, where a word is not fixated during first-pass reading, is affected by the lexical status of a letter string in the parafovea and ease of recognizing that string. Ease of lexical recognition was manipulated through target-word frequency (Experiment 1) and through repetition priming between prime-target pairs embedded in a sentence (Experiment 2). Using the gaze-contingent boundary technique the target word appeared in the parafovea either with full preview or with transposed-letter (TL) preview. The TL preview strings were nonwords in Experiment 1 (e.g., bilnk created from the target blink), but were words in Experiment 2 (e.g., sacred created from the target scared). Experiment 1 showed greater skipping for high-frequency than low-frequency target words in the full preview condition but not in the TL preview (nonword) condition. Experiment 2 showed greater skipping for target words that repeated an earlier prime word than for those that did not, with this repetition priming occurring both with preview of the full target and with preview of the target’s TL neighbor word. However, time to progress from the word after the target was greater following skips of the TL preview word, whose meaning was anomalous in the sentence context, than following skips of the full preview word whose meaning fit sensibly into the sentence context. Together, the results support the idea that coordination between word-recognition and oculomotor processes occurs at the level of implicit lexical decisions. PMID:23106372

  3. The Effects of Video Self-Modeling on the Decoding Skills of Children At Risk for Reading Disabilities

    OpenAIRE

    Ayala, Sandra M

    2010-01-01

    Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum instruction. Individual videos were recorded and edited to show students successfully and accurately decoding words and practicing sight word recognition. Each...

  4. A familiar font drives early emotional effects in word recognition.

    Science.gov (United States)

    Kuchinke, Lars; Krause, Beatrix; Fritsch, Nathalie; Briesemeister, Benny B

    2014-10-01

    The emotional connotation of a word is known to shift the process of word recognition. Using the electroencephalographic event-related potentials (ERPs) approach it has been documented that early attentional processing of high-arousing negative words is shifted at a stage of processing where a presented word cannot have been fully identified. Contextual learning has been discussed to contribute to these effects. The present study shows that a manipulation of the familiarity with a word's shape interferes with these earliest emotional ERP effects. Presenting high-arousing negative and neutral words in a familiar or an unfamiliar font results in very early emotion differences only in case of familiar shapes, whereas later processing stages reveal similar emotional effects in both font conditions. Because these early emotion-related differences predict later behavioral differences, it is suggested that contextual learning of emotional valence comprises more visual features than previously expected to guide early visual-sensory processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Relationships between Structural and Acoustic Properties of Maternal Talk and Children's Early Word Recognition

    Science.gov (United States)

    Suttora, Chiara; Salerni, Nicoletta; Zanchi, Paola; Zampini, Laura; Spinelli, Maria; Fasolo, Mirco

    2017-01-01

    This study aimed to investigate specific associations between structural and acoustic characteristics of infant-directed (ID) speech and word recognition. Thirty Italian-acquiring children and their mothers were tested when the children were 1;3. Children's word recognition was measured with the looking-while-listening task. Maternal ID speech was…

  6. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  7. Robotics control using isolated word recognition of voice input

    Science.gov (United States)

    Weiner, J. M.

    1977-01-01

    A speech input/output system is presented that can be used to communicate with a task-oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility comprises a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100 commands), syntactically constrained vocabulary and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.
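
    The abstract does not specify the matching algorithm. Isolated-word recognizers of this kind are commonly built by comparing an incoming feature-vector sequence against stored command templates, for example with dynamic time warping (DTW). The sketch below illustrates that general approach; the feature dimensions, command names, and the restriction to a syntactically allowed subset are assumptions for illustration, not details from the paper.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two feature sequences.

        a, b: arrays of shape (T, D) -- one D-dimensional feature vector per frame.
        """
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
                cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                     cost[i, j - 1],       # deletion
                                     cost[i - 1, j - 1])   # match
        return cost[n, m]

    def recognize(utterance, templates, allowed=None):
        """Return the best-matching command, optionally restricted to a
        syntactically allowed subset (a 'syntactically constrained vocabulary')."""
        candidates = templates if allowed is None else {w: templates[w] for w in allowed}
        return min(candidates, key=lambda w: dtw_distance(utterance, candidates[w]))

    # Toy usage: two commands, 12-dimensional frames (e.g., filter-bank energies).
    rng = np.random.default_rng(0)
    templates = {"stop": rng.normal(size=(20, 12)), "move left": rng.normal(size=(25, 12))}
    spoken = templates["stop"] + 0.1 * rng.normal(size=(20, 12))
    print(recognize(spoken, templates, allowed={"stop", "move left"}))
    ```

    Pruning the candidate set with syntactic constraints before matching is one plausible way such a system keeps recognition of a 100-command vocabulary close to real time.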

  8. Levels-of-processing effect on frontotemporal function in schizophrenia during word encoding and recognition.

    Science.gov (United States)

    Ragland, J Daniel; Gur, Ruben C; Valdez, Jeffrey N; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J; Moelter, Stephen T; Gur, Raquel E

    2005-10-01

    Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies.

  9. See Before You Jump: Full Recognition of Parafoveal Words Precedes Skips During Reading

    Science.gov (United States)

    Gordon, Peter C.; Plummer, Patrick; Choi, Wonil

    2013-01-01

    Serial attention models of eye-movement control during reading were evaluated in an eye-tracking experiment that examined how lexical activation combines with visual information in the parafovea to affect word skipping (where a word is not fixated during first-pass reading). Lexical activation was manipulated by repetition priming created through prime-target pairs embedded within a sentence. The boundary technique (Rayner, 1975) was used to determine whether the target word was fully available during parafoveal preview or whether it was available with transposed letters (e.g., Herman changed to Hreman). With full parafoveal preview, the target word was skipped more frequently when it matched the earlier prime word (i.e., was repeated) than when it did not match the earlier prime word (i.e., was new). With transposed-letter (TL) preview, repetition had no effect on skipping rates despite the great similarity of the TL preview string to the target word and substantial evidence that TL strings activate the words from which they are derived (Perea & Lupker, 2003). These results show that lexically-based skipping is based on full recognition of the letter string in parafoveal preview and does not involve using the contextual constraint to compensate for the reduced information available from the parafovea. These results are consistent with models of eye-movement control during reading in which successive words in a text are processed one at a time (serially) and in which word recognition strongly influences eye movements. PMID:22686842

  10. The Influence of Semantic Neighbours on Visual Word Recognition

    Science.gov (United States)

    Yates, Mark

    2012-01-01

    Although it is assumed that semantics is a critical component of visual word recognition, there is still much that we do not understand. One recent way of studying semantic processing has been in terms of semantic neighbourhood (SN) density, and this research has shown that semantic neighbours facilitate lexical decisions. However, it is not clear…

  11. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    Science.gov (United States)

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  12. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
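
    The alpha/theta dissociation rests on estimating spectral power in the two bands over time. Below is a minimal, illustrative sketch of band-power extraction from a single synthetic channel using a short-time Fourier transform; the study itself used more elaborate time-frequency analysis and spatial filtering, and the sampling rate, band edges, and signal here are assumptions.

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 250                                   # sampling rate in Hz (assumed)
    t = np.arange(0, 4, 1 / fs)
    # Synthetic single-channel signal: 5 Hz (theta) plus 10 Hz (alpha) components and noise.
    signal = (np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
              + 0.3 * np.random.default_rng(3).normal(size=t.size))

    freqs, times, Z = stft(signal, fs=fs, nperseg=fs)   # 1-second analysis windows
    power = np.abs(Z) ** 2

    theta = power[(freqs >= 3) & (freqs <= 7)].mean(axis=0)    # theta power over time
    alpha = power[(freqs >= 8) & (freqs <= 12)].mean(axis=0)   # alpha power over time
    print(theta.round(3), alpha.round(3))
    ```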

  13. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    Science.gov (United States)

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  14. Effects of lexical characteristics and demographic factors on mandarin chinese open-set word recognition in children with cochlear implants.

    Science.gov (United States)

    Liu, Haihong; Liu, Sha; Wang, Suju; Liu, Chang; Kong, Ying; Zhang, Ning; Li, Shujing; Yang, Yilin; Han, Demin; Zhang, Luo

    2013-01-01

    The purpose of this study was to examine the open-set word recognition performance of Mandarin Chinese-speaking children who had received a multichannel cochlear implant (CI) and to examine the effects of lexical characteristics and demographic factors (i.e., age at implantation and duration of implant use) on Mandarin Chinese open-set word recognition in these children. Participants were 230 prelingually deafened children with CIs. Age at implantation ranged from 0.9 to 16.0 years, with a mean of 3.9 years. The Standard-Chinese version of the Monosyllabic Lexical Neighborhood test and the Multisyllabic Lexical Neighborhood test were used to evaluate the open-set word identification abilities of the children. A two-way analysis of variance was performed to delineate the lexical effects on the open-set word identification, with word difficulty and syllable length as the two main factors. The effects of age at implantation and duration of implant use on open-set word-recognition performance were examined using correlational/regression models. First, the average percent-correct scores for the disyllabic "easy" list, disyllabic "hard" list, monosyllabic "easy" list, and monosyllabic "hard" list were 65.0%, 51.3%, 58.9%, and 46.2%, respectively. For both the easy and hard lists, the percentage of words correctly identified was higher for disyllabic words than for monosyllabic words. Second, the CI group scored 26.3%, 31.3%, and 18.8% points lower than their hearing-age-matched normal-hearing peers for 4, 5, and 6 years of hearing age, respectively. The corresponding gaps between the CI group and the chronological-age-matched normal-hearing group were 47.6, 49.6, and 42.4, respectively. The individual variations in performance were much greater in the CI group than in the normal-hearing group. Third, the children exhibited steady improvements in performance as the duration of implant use increased, especially 1 to 6 years postimplantation. Last, age at implantation had…

  15. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  17. Recognition of Handwritten Arabic words using a neuro-fuzzy network

    International Nuclear Information System (INIS)

    Boukharouba, Abdelhak; Bennia, Abdelhak

    2008-01-01

    We present a new method for the recognition of handwritten Arabic words based on a neuro-fuzzy hybrid network. As a first step, connected components (CCs) of black pixels are detected. Then the system determines which CCs are sub-words and which are stress marks. The stress marks are isolated and identified separately, and the sub-words are segmented into graphemes. Each grapheme is described by topological and statistical features. Fuzzy rules are extracted from training examples by a hybrid learning scheme comprising two phases: a rule-generation phase that derives rules from the data using fuzzy c-means clustering, and a rule-parameter tuning phase that uses gradient-descent learning. After learning, the network encodes in its topology the essential design parameters of a fuzzy inference system. The contribution of this technique is shown through significant tests performed on a handwritten Arabic word database.
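
    The first phase of such a hybrid learning scheme, generating fuzzy rules from grapheme features, can be seeded by fuzzy c-means clustering, with each cluster centre serving as the prototype of one rule. The sketch below shows a generic fuzzy c-means pass over hypothetical feature vectors; it is illustrative only and not the authors' implementation (feature dimensionality and rule count are assumed).

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Fuzzy c-means: returns cluster centres and the membership matrix U (n x c)."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((n, c))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = U ** m
            centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
            dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
            U_new = 1.0 / (dist ** (2.0 / (m - 1)))
            U_new /= U_new.sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                U = U_new
                break
            U = U_new
        return centres, U

    # Hypothetical grapheme features: 200 samples, 8 topological/statistical features,
    # clustered into 5 fuzzy rules (one membership function per rule centre).
    X = np.random.default_rng(1).normal(size=(200, 8))
    centres, U = fuzzy_c_means(X, c=5)
    print(centres.shape, U.shape)
    ```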

  18. The Role of Morphology in Word Recognition of Hebrew as a Templatic Language

    Science.gov (United States)

    Oganyan, Marina

    2017-01-01

    Research on recognition of complex words has primarily focused on affixational complexity in concatenative languages. This dissertation investigates both templatic and affixational complexity in Hebrew, a templatic language, with particular focus on the role of the root and template morphemes in recognition. It also explores the role of morphology…

  19. Noticing the self: Implicit assessment of self-focused attention using word recognition latencies

    OpenAIRE

    Eichstaedt, Dr Jan; Silvia, Dr Paul J.

    2003-01-01

    Self-focused attention is difficult to measure. Two studies developed an implicit measure of self-focus based on word recognition latencies. Self-focused attention activates self-content, so self-focused people should recognize self-relevant words more quickly. Study 1 measured individual-differences in self-focused attention. People scoring high in private self-consciousness recognized self-relevant words more quickly. Study 2 manipulated objective self-awareness with a writing task. People ...

  20. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and morphological processing (i.e., semantic access to the first constituent) that occurs at an early processing stage before access to the representation of the whole word in Chinese.

  1. Short-Term and Long-Term Effects on Visual Word Recognition

    Science.gov (United States)

    Protopapas, Athanassios; Kapnoula, Efthymia C.

    2016-01-01

    Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item…

  2. A Demonstration of Improved Precision of Word Recognition Scores

    Science.gov (United States)

    Schlauch, Robert S.; Anderson, Elizabeth S.; Micheyl, Christophe

    2014-01-01

    Purpose: The purpose of this study was to demonstrate improved precision of word recognition scores (WRSs) by increasing list length and analyzing phonemic errors. Method: Pure-tone thresholds (frequencies between 0.25 and 8.0 kHz) and WRSs were measured in 3 levels of speech-shaped noise (50, 52, and 54 dB HL) for 24 listeners with normal…

  3. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    Science.gov (United States)

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  4. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words whose orthographic syllable neighbors are large in number rather than those that are small. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  5. Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; O'Toole, John Mitchell

    2015-01-01

    The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…

  6. Bedding down new words: Sleep promotes the emergence of lexical competition in visual word recognition.

    Science.gov (United States)

    Wang, Hua-Chen; Savage, Greg; Gaskell, M Gareth; Paulin, Tamara; Robidoux, Serje; Castles, Anne

    2017-08-01

    Lexical competition processes are widely viewed as the hallmark of visual word recognition, but little is known about the factors that promote their emergence. This study examined for the first time whether sleep may play a role in inducing these effects. A group of 27 participants learned novel written words, such as banara, at 8 am and were tested on their learning at 8 pm the same day (AM group), while 29 participants learned the words at 8 pm and were tested at 8 am the following day (PM group). Both groups were retested after 24 hours. Using a semantic categorization task, we showed that lexical competition effects, as indexed by slowed responses to existing neighbor words such as banana, emerged 12 h later in the PM group who had slept after learning but not in the AM group. After 24 h the competition effects were evident in both groups. These findings have important implications for theories of orthographic learning and broader neurobiological models of memory consolidation.

  7. Automatization and Orthographic Development in Second Language Visual Word Recognition

    Science.gov (United States)

    Kida, Shusaku

    2016-01-01

    The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…

  8. Psychometrically equivalent bisyllabic words for speech recognition threshold testing in Vietnamese.

    Science.gov (United States)

    Harris, Richard W; McPherson, David L; Hanson, Claire M; Eggett, Dennis L

    2017-08-01

    This study identified, digitally recorded, edited and evaluated 89 bisyllabic Vietnamese words with the goal of identifying homogeneous words that could be used to measure the speech recognition threshold (SRT) in native talkers of Vietnamese. Native male and female talker productions of 89 Vietnamese bisyllabic words were recorded, edited and then presented at intensities ranging from -10 to 20 dBHL. Logistic regression was used to identify the best words for measuring the SRT. Forty-eight words were selected and digitally edited to have 50% intelligibility at a level equal to the mean pure-tone average (PTA) for normally hearing participants (5.2 dBHL). Twenty normally hearing native Vietnamese participants listened to and repeated bisyllabic Vietnamese words at intensities ranging from -10 to 20 dBHL. A total of 48 male and female talker recordings of bisyllabic words with steep psychometric functions (>9.0%/dB) were chosen for the final bisyllabic SRT list. Only words homogeneous with respect to threshold audibility with steep psychometric function slopes were chosen for the final list. Digital recordings of bisyllabic Vietnamese words are now available for use in measuring the SRT for patients whose native language is Vietnamese.
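
    The selection criteria described here (50% intelligibility near the normal-hearing mean PTA and a psychometric-function slope steeper than 9.0%/dB) can be illustrated by fitting a logistic psychometric function to one word's percent-correct scores across presentation levels. The data and fitting routine below are illustrative assumptions, not the study's; the slope of a logistic at its 50% point is k/4 in proportion units, converted to %/dB here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(level, midpoint, k):
        """Proportion correct as a function of presentation level (dB HL)."""
        return 1.0 / (1.0 + np.exp(-k * (level - midpoint)))

    levels = np.array([-10, -5, 0, 5, 10, 15, 20], dtype=float)      # dB HL
    # Hypothetical proportion-correct data for one bisyllabic word:
    p_correct = np.array([0.02, 0.10, 0.35, 0.72, 0.93, 0.99, 1.00])

    (midpoint, k), _ = curve_fit(logistic, levels, p_correct, p0=[5.0, 0.5])
    slope_pct_per_db = 100 * k / 4          # slope of the logistic at its 50% point

    print(f"50% threshold: {midpoint:.1f} dB HL, slope: {slope_pct_per_db:.1f} %/dB")
    keep = slope_pct_per_db > 9.0           # steep-slope selection criterion from the abstract
    ```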

  9. Context affects L1 but not L2 during bilingual word recognition: an MEG study.

    Science.gov (United States)

    Pellikka, Janne; Helenius, Päivi; Mäkelä, Jyrki P; Lehtonen, Minna

    2015-03-01

    How do bilinguals manage the activation levels of the two languages and prevent interference from the irrelevant language? Using magnetoencephalography, we studied the effect of context on the activation levels of languages by manipulating the composition of word lists (the probability of the languages) presented auditorily to late Finnish-English bilinguals. We first determined the upper limit time-window for semantic access, and then focused on the preceding responses during which the actual word recognition processes were assumedly ongoing. Between 300 and 500 ms in the temporal cortices (in the N400 m response) we found an asymmetric language switching effect: the responses to L1 Finnish words were affected by the presentation context unlike the responses to L2 English words. This finding suggests that the stronger language is suppressed in an L2 context, supporting models that allow auditory word recognition to be affected by contextual factors and the language system to be subject to inhibitory influence. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Early processing of orthographic language membership information in bilingual visual word recognition: Evidence from ERPs.

    Science.gov (United States)

    Hoversten, Liv J; Brothers, Trevor; Swaab, Tamara Y; Traxler, Matthew J

    2017-08-01

    For successful language comprehension, bilinguals often must exert top-down control to access and select lexical representations within a single language. These control processes may critically depend on identification of the language to which a word belongs, but it is currently unclear when different sources of such language membership information become available during word recognition. In the present study, we used event-related potentials to investigate the time course of influence of orthographic language membership cues. Using an oddball detection paradigm, we observed early neural effects of orthographic bias (Spanish vs. English orthography) that preceded effects of lexicality (word vs. pseudoword). This early orthographic pop-out effect was observed for both words and pseudowords, suggesting that this cue is available prior to full lexical access. We discuss the role of orthographic bias for models of bilingual word recognition and its potential role in the suppression of nontarget lexical information. Published by Elsevier Ltd.

  11. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    OpenAIRE

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2011-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were la...

  12. Tracking the emergence of the consonant bias in visual-word recognition: evidence with developing readers.

    Science.gov (United States)

    Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat

    2014-01-01

    Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.

  13. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  14. Bilingual Word Recognition in Deaf and Hearing Signers: Effects of Proficiency and Language Dominance on Cross-Language Activation

    Science.gov (United States)

    Morford, Jill P.; Kroll, Judith F.; Piñar, Pilar; Wilkinson, Erin

    2014-01-01

    Recent evidence demonstrates that American Sign Language (ASL) signs are active during print word recognition in deaf bilinguals who are highly proficient in both ASL and English. In the present study, we investigate whether signs are active during print word recognition in two groups of unbalanced bilinguals: deaf ASL-dominant and hearing…

  15. HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.

    Science.gov (United States)

    Bharath, A; Madhvanath, Sriganesh

    2012-04-01

    Research for recognizing online handwritten words in Indic scripts is at its early stages when compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts--Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon free one as well as their combination. The best recognition accuracies obtained for 20,000 word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.
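
    In a lexicon-driven technique of this kind, each lexicon entry is evaluated as a chain of its symbol HMMs and the best-scoring word wins. The sketch below builds toy left-to-right symbol HMMs with discrete emissions, concatenates them into word models, and scores an observation sequence with Viterbi. All parameters, symbol inventories, and word names are invented for illustration and do not reflect the paper's Devanagari/Tamil models or features.

    ```python
    import numpy as np

    def left_to_right_hmm(n_states, emission_probs):
        """Build a simple left-to-right symbol HMM: each state loops or advances."""
        A = np.zeros((n_states, n_states))
        for s in range(n_states):
            A[s, s] = 0.6
            if s + 1 < n_states:
                A[s, s + 1] = 0.4
            else:
                A[s, s] = 1.0                      # final state of the symbol absorbs
        return A, np.asarray(emission_probs)       # emission_probs: (n_states, n_symbols)

    def concatenate(models):
        """Chain symbol HMMs into one word HMM (last state of one feeds the next)."""
        As, Bs = zip(*models)
        n = sum(a.shape[0] for a in As)
        A = np.zeros((n, n)); B = np.zeros((n, Bs[0].shape[1]))
        offset = 0
        for a, b in zip(As, Bs):
            k = a.shape[0]
            A[offset:offset + k, offset:offset + k] = a
            if offset + k < n:                      # bridge into the next symbol model
                A[offset + k - 1, offset + k - 1] = 0.6
                A[offset + k - 1, offset + k] = 0.4
            B[offset:offset + k] = b
            offset += k
        return A, B

    def viterbi_log_score(obs, A, B):
        """Best-path log-likelihood of a discrete observation sequence (start in state 0)."""
        with np.errstate(divide="ignore"):
            logA, logB = np.log(A), np.log(B)
        delta = np.full(A.shape[0], -np.inf)
        delta[0] = logB[0, obs[0]]
        for o in obs[1:]:
            delta = np.max(delta[:, None] + logA, axis=0) + logB[:, o]
        return delta.max()

    # Toy lexicon of two "words", each a chain of 3-state symbol HMMs over 4 observation symbols.
    rng = np.random.default_rng(0)
    def rand_symbol_hmm():
        return left_to_right_hmm(3, rng.dirichlet(np.ones(4), size=3))

    lexicon = {"word_a": concatenate([rand_symbol_hmm(), rand_symbol_hmm()]),
               "word_b": concatenate([rand_symbol_hmm(), rand_symbol_hmm(), rand_symbol_hmm()])}
    obs = rng.integers(0, 4, size=12)
    print(max(lexicon, key=lambda w: viterbi_log_score(obs, *lexicon[w])))
    ```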

  16. Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

    Science.gov (United States)

    Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve

    The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by

  17. Is pupillary response a reliable index of word recognition? Evidence from a delayed lexical decision task.

    Science.gov (United States)

    Haro, Juan; Guasch, Marc; Vallès, Blanca; Ferré, Pilar

    2017-10-01

    Previous word recognition studies have shown that the pupillary response is sensitive to a word's frequency. However, such a pupillary effect may be due to the process of executing a response, instead of being an index of word processing. With the aim of exploring this possibility, we recorded the pupillary responses in two experiments involving a lexical decision task (LDT). In the first experiment, participants completed a standard LDT, whereas in the second they performed a delayed LDT. The delay in the response allowed us to compare pupil dilations with and without the response execution component. The results showed that pupillary response was modulated by word frequency in both the standard and the delayed LDT. This finding supports the reliability of using pupillometry for word recognition research. Importantly, our results also suggest that tasks that do not require a response during pupil recording lead to clearer and stronger effects.

  18. Children's Spoken Word Recognition and Contributions to Phonological Awareness and Nonword Repetition: A 1-Year Follow-Up

    Science.gov (United States)

    Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.

    2009-01-01

    This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…

  19. Predicting word-recognition performance in noise by young listeners with normal hearing using acoustic, phonetic, and lexical variables.

    Science.gov (United States)

    McArdle, Rachel; Wilson, Richard H

    2008-06-01

    To analyze the 50% correct recognition data that were from the Wilson et al (this issue) study and that were obtained from 24 listeners with normal hearing; also to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). The descriptive, correlational study will examine the influence of acoustic, phonetic, and lexical variables on speech recognition in noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word-recognition-in-noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.
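
    As a rough illustration of the kind of analysis reported in this record, the sketch below fits separate linear regressions for acoustic/phonetic versus lexical predictors of each word's 50%-correct point and compares their R². The column names, the synthetic data, and the simulated effect sizes are hypothetical stand-ins, not the study's variables or results.

```python
# Hedged sketch of the kind of regression reported above: compare how much of the
# variance in each word's 50%-correct point is explained by acoustic/phonetic
# predictors versus lexical predictors. The column names, the synthetic data, and
# the effect sizes are illustrative stand-ins, not the study's variables or results.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_words = 150
df = pd.DataFrame({
    "rms_level": rng.normal(0, 1, n_words),       # acoustic
    "duration": rng.normal(0, 1, n_words),        # acoustic
    "voicing": rng.integers(0, 2, n_words),       # phonetic (dummy-coded)
    "word_freq": rng.normal(0, 1, n_words),       # lexical
    "nbhd_density": rng.normal(0, 1, n_words),    # lexical
})
# Synthetic outcome dominated by acoustic/phonetic terms, echoing the reported
# 45% versus 3% split in explained variance.
df["snr50"] = (0.8 * df.rms_level - 0.5 * df.duration + 0.4 * df.voicing
               + 0.1 * df.word_freq + rng.normal(0, 0.6, n_words))

def r_squared(columns):
    X, y = df[columns].to_numpy(), df["snr50"].to_numpy()
    return LinearRegression().fit(X, y).score(X, y)

print("acoustic + phonetic R^2:", round(r_squared(["rms_level", "duration", "voicing"]), 2))
print("lexical R^2:            ", round(r_squared(["word_freq", "nbhd_density"]), 2))
```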

  20. Face recognition system and method using face pattern words and face pattern bytes

    Science.gov (United States)

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognitions for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  1. Morphological awareness and early and advanced word recognition and spelling in Dutch

    NARCIS (Netherlands)

    Rispens, J.E.; McBride-Chang, C.; Reitsma, P.

    2008-01-01

    This study investigated the relations of three aspects of morphological awareness to word recognition and spelling skills of Dutch speaking children. Tasks of inflectional and derivational morphology and lexical compounding, as well as measures of phonological awareness, vocabulary and mathematics

  2. Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.

    Science.gov (United States)

    Shi, Lu-Feng

    2017-04-01

    Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly

  3. The development of the University of Jordan word recognition test.

    Science.gov (United States)

    Garadat, Soha N; Abdulbaqi, Khader J; Haj-Tas, Maisa A

    2017-06-01

    To develop and validate a digitally recorded speech test battery to assess speech perception in Jordanian Arabic-speaking adults. Selected stimuli were digitally recorded and were divided into four lists of 25 words each. Speech audiometry was completed for all listeners. Participants were divided into two equal groups of 30 listeners each with equal male to female ratio. The first group of participants completed speech reception thresholds (SRTs) and word recognition testing on each of the four lists using a fixed intensity. The second group of listeners was tested on each of the four lists at different intensity levels in order to obtain the performance-intensity function. Sixty normal-hearing listeners in the age range of 19-25 years. All participants were native speakers of Jordanian Arabic. Results revealed that there were no significant differences between SRTs and pure tone average. Additionally, there were no differences across lists at multiple intensity levels. In general, the current study was successful in producing recorded speech materials for Jordanian Arabic population. This suggests that the speech stimuli generated by this study are suitable for measuring speech recognition in Jordanian Arabic-speaking listeners.

  4. An automatic system for Turkish word recognition using Discrete Wavelet Neural Network based on adaptive entropy

    International Nuclear Information System (INIS)

    Avci, E.

    2007-01-01

    In this paper, an automatic system is presented for word recognition using real Turkish word signals. The paper deals especially with the combination of feature extraction and classification of real Turkish word signals. A Discrete Wavelet Neural Network (DWNN) model is used, which consists of two layers: a discrete wavelet layer and a multi-layer perceptron. The discrete wavelet layer is used for adaptive feature extraction in the time-frequency domain and is composed of the Discrete Wavelet Transform (DWT) and wavelet entropy. The multi-layer perceptron used for classification is a feed-forward neural network. The performance of the system is evaluated using noisy Turkish word signals. Test results showing the effectiveness of the proposed automatic system are presented. The rate of correct recognition is about 92.5% for the sample speech signals. (author)
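
    A rough sketch of the two-stage architecture this record describes, assuming PyWavelets and scikit-learn: a discrete wavelet decomposition with one wavelet-entropy feature per subband, fed to a feed-forward multi-layer perceptron. The wavelet family, decomposition depth, entropy definition, and toy signals are illustrative assumptions, not the paper's exact settings.

```python
# Rough sketch of the two-stage system described above, assuming PyWavelets and
# scikit-learn: a discrete wavelet decomposition with one wavelet-entropy feature
# per subband, followed by a feed-forward multi-layer perceptron classifier. The
# wavelet family, decomposition depth, entropy definition, and toy signals are
# illustrative assumptions, not the paper's exact settings.

import numpy as np
import pywt                                    # PyWavelets
from sklearn.neural_network import MLPClassifier

def wavelet_entropy_features(signal, wavelet="db4", level=4):
    """One Shannon-entropy term per DWT subband (approximation + details)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()              # each subband's share of total energy
    return -p * np.log2(p + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 800)
    labels = np.array([0] * 30 + [1] * 30)
    # Toy "word" signals: two classes with different dominant frequencies plus noise.
    X = np.array([wavelet_entropy_features(
            np.sin(2 * np.pi * (40 if y else 5) * t) + 0.3 * rng.normal(size=t.size))
        for y in labels])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
```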

  5. Memory bias for negative emotional words in recognition memory is driven by effects of category membership.

    Science.gov (United States)

    White, Corey N; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M; Ratcliff, Roger

    2014-01-01

    Recognition memory studies often find that emotional items are more likely than neutral items to be labelled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorised words were presented in the lists. Similar, though weaker, effects were observed with categorised words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership.

  6. Validating Models of Clinical Word Recognition Tests for Spanish/English Bilinguals

    Science.gov (United States)

    Shi, Lu-Feng

    2014-01-01

    Purpose: Shi and Sánchez (2010) developed models to predict the optimal test language for evaluating Spanish/English (S/E) bilinguals' word recognition. The current study intended to validate their conclusions in a separate bilingual listener sample. Method: Seventy normal-hearing S/E bilinguals varying in language profile were included.…

  7. Prediction of Word Recognition in the First Half of Grade 1

    Science.gov (United States)

    Snel, M. J.; Aarnoutse, C. A. J.; Terwel, J.; van Leeuwe, J. F. J.; van der Veld, W. M.

    2016-01-01

    Early detection of reading problems is important to prevent an enduring lag in reading skills. We studied the relationship between speed of word recognition (after six months of grade 1 education) and four kindergarten pre-literacy skills: letter knowledge, phonological awareness and naming speed for both digits and letters. Our sample consisted…

  8. From perception to metacognition: Auditory and olfactory functions in early blind, late blind, and sighted individuals

    Directory of Open Access Journals (Sweden)

    Stina Cornell Kärnekull

    2016-09-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity.
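
    The analysis strategy described here, an omnibus MANOVA across modalities followed by between-group contrasts, can be sketched as follows; the variable names, group sizes, and simulated scores are placeholders, and the statsmodels/scipy calls stand in for whatever software the authors actually used.

```python
# Illustrative sketch of the analysis strategy described above: an omnibus
# one-way MANOVA across groups over two outcome measures, followed by a
# univariate between-group contrast. Variable names, group sizes, and the
# simulated scores are placeholders, not the study's data or software.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": ["early_blind"] * 15 + ["late_blind"] * 15 + ["sighted"] * 30,
    # Auditory episodic recognition with a small simulated early-blind advantage.
    "aud_recognition": rng.normal(0, 1, 60) + np.array([0.6] * 15 + [0.3] * 15 + [0.0] * 30),
    "olf_recognition": rng.normal(0, 1, 60),     # no simulated group effect
})

# Omnibus multivariate test over both outcome measures.
print(MANOVA.from_formula("aud_recognition + olf_recognition ~ group", data=df).mv_test())

# Follow-up contrast: early blind versus sighted on auditory episodic recognition.
early = df.loc[df.group == "early_blind", "aud_recognition"]
sighted = df.loc[df.group == "sighted", "aud_recognition"]
print(stats.ttest_ind(early, sighted, equal_var=False))
```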

  9. The role of syllabic structure in French visual word recognition.

    Science.gov (United States)

    Rouibah, A; Taft, M

    2001-03-01

    Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.

  10. Event-related potentials and recognition memory for pictures and words: the effects of intentional and incidental learning.

    Science.gov (United States)

    Noldy, N E; Stelmack, R M; Campbell, K B

    1990-07-01

    Event-related potentials were recorded under conditions of intentional or incidental learning of pictures and words, and during the subsequent recognition memory test for these stimuli. Intentionally learned pictures were remembered better than incidentally learned pictures and intentionally learned words, which, in turn, were remembered better than incidentally learned words. In comparison to pictures that were ignored, the pictures that were attended were characterized by greater positive amplitude frontally at 250 ms and centro-parietally at 350 ms and by greater negativity at 450 ms at parietal and occipital sites. There were no effects of attention on the waveforms elicited by words. These results support the view that processing becomes automatic for words, whereas the processing of pictures involves additional effort or allocation of attentional resources. The N450 amplitude was greater for words than for pictures during both acquisition (intentional items) and recognition phases (hit and correct rejection categories for intentional items, hit category for incidental items). Because pictures are better remembered than words, the greater late positive wave (600 ms) elicited by the pictures than the words during the acquisition phase is also consistent with the association between P300 and better memory that has been reported.

  11. The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words

    Science.gov (United States)

    Xu, Joe; Taft, Marcus

    2015-01-01

    A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…

  12. Visual information constrains early and late stages of spoken-word recognition in sentence context.

    Science.gov (United States)

    Brunellière, Angèle; Sánchez-García, Carolina; Ikumi, Nara; Soto-Faraco, Salvador

    2013-07-01

    Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and, whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger long-lasting N400, compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Lexical-Semantic Processing and Reading: Relations between Semantic Priming, Visual Word Recognition and Reading Comprehension

    Science.gov (United States)

    Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli

    2016-01-01

    The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…

  14. The Influence of Semantic Constraints on Bilingual Word Recognition during Sentence Reading

    Science.gov (United States)

    Van Assche, Eva; Drieghe, Denis; Duyck, Wouter; Welvaert, Marijke; Hartsuiker, Robert J.

    2011-01-01

    The present study investigates how semantic constraint of a sentence context modulates language-non-selective activation in bilingual visual word recognition. We recorded Dutch-English bilinguals' eye movements while they read cognates and controls in low and high semantically constraining sentences in their second language. Early and late…

  15. A spatially-supported forced-choice recognition test reveals children’s long-term memory for newly learned word forms

    Directory of Open Access Journals (Sweden)

    Katherine R. Gordon

    2014-03-01

    Children’s memories for the link between a newly trained word and its referent have been the focus of extensive past research. However, memory for the word form itself is rarely assessed among preschool-age children. When it is, children are typically asked to verbally recall the forms, and they generally perform at floor on such tests. To better measure children’s memory for word forms, we aimed to design a more sensitive test that required recognition rather than recall, provided spatial cues to offset the phonological memory demands of the test, and allowed pointing rather than verbal responses. We taught 12 novel word-referent pairs via ostensive naming to sixteen 4-to-6-year-olds and measured their memory for the word forms after a week-long retention interval using the new spatially-supported form recognition test. We also measured their memory for the word-referent links and the generalization of the links to untrained referents with commonly used recognition tests. Children demonstrated memory for word forms at above chance levels; however, their memory for forms was poorer than their memory for trained or generalized word-referent links. When in error, children were no more likely to select a foil that was a close neighbor to the target form than a maximally different foil. Additionally, they more often selected correct forms that were among the first six than the last six to be trained. Overall, these findings suggest that children are able to remember word forms after a limited number of ostensive exposures and a long-term delay. However, word forms remain more difficult to learn than word-referent links and there is an upper limit on the number of forms that can be learned within a given period of time.

  16. The Activation of Embedded Words in Spoken Word Recognition.

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions.

  17. Testing Measurement Invariance across Groups of Children with and without Attention-Deficit/ Hyperactivity Disorder: Applications for Word Recognition and Spelling Tasks.

    Science.gov (United States)

    Lúcio, Patrícia S; Salum, Giovanni; Swardfager, Walter; Mari, Jair de Jesus; Pan, Pedro M; Bressan, Rodrigo A; Gadelha, Ary; Rohde, Luis A; Cogo-Moreira, Hugo

    2017-01-01

    Although studies have consistently demonstrated that children with attention-deficit/hyperactivity disorder (ADHD) perform significantly lower than controls on word recognition and spelling tests, such studies rely on the assumption that those groups are comparable in these measures. This study investigates comparability of word recognition and spelling tests based on diagnostic status for ADHD through measurement invariance methods. The participants (n = 1,935; 47% female; 11% ADHD) were children aged 6-15 with normal IQ (≥70). Measurement invariance was investigated through Confirmatory Factor Analysis and Multiple Indicators Multiple Causes models. Measurement invariance was attested in both methods, demonstrating the direct comparability of the groups. Children with ADHD were 0.51 SD lower in word recognition and 0.33 SD lower in spelling tests than controls. Results suggest that differences in performance on word recognition and spelling tests are related to true mean differences based on ADHD diagnostic status. Implications for clinical practice and research are discussed.

  18. The Activation of Embedded Words in Spoken Word Recognition

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  19. Neural Correlates of Word Recognition: A Systematic Comparison of Natural Reading and Rapid Serial Visual Presentation.

    Science.gov (United States)

    Kornrumpf, Benthe; Niefind, Florian; Sommer, Werner; Dimigen, Olaf

    2016-09-01

    Neural correlates of word recognition are commonly studied with (rapid) serial visual presentation (RSVP), a condition that eliminates three fundamental properties of natural reading: parafoveal preprocessing, saccade execution, and the fast changes in attentional processing load occurring from fixation to fixation. We combined eye-tracking and EEG to systematically investigate the impact of all three factors on brain-electric activity during reading. Participants read lists of words either actively with eye movements (eliciting fixation-related potentials) or maintained fixation while the text moved passively through foveal vision at a matched pace (RSVP-with-flankers paradigm, eliciting ERPs). The preview of the upcoming word was manipulated by changing the number of parafoveally visible letters. Processing load was varied by presenting words of varying lexical frequency. We found that all three factors have strong interactive effects on the brain's responses to words: Once a word was fixated, occipitotemporal N1 amplitude decreased monotonically with the amount of parafoveal information available during the preceding fixation; hence, the N1 component was markedly attenuated under reading conditions with preview. Importantly, this preview effect was substantially larger during active reading (with saccades) than during passive RSVP with flankers, suggesting that the execution of eye movements facilitates word recognition by increasing parafoveal preprocessing. Lastly, we found that the N1 component elicited by a word also reflects the lexical processing load imposed by the previously inspected word. Together, these results demonstrate that, under more natural conditions, words are recognized in a spatiotemporally distributed and interdependent manner across multiple eye fixations, a process that is mediated by active motor behavior.

  20. Phonological Contribution during Visual Word Recognition in Child Readers. An Intermodal Priming Study in Grades 3 and 5

    Science.gov (United States)

    Sauval, Karinne; Casalis, Séverine; Perre, Laetitia

    2017-01-01

    This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…

  1. Acquisition of Malay Word Recognition Skills: Lessons from Low-Progress Early Readers

    Science.gov (United States)

    Lee, Lay Wah; Wheldall, Kevin

    2011-01-01

    Malay is a consistent alphabetic orthography with complex syllable structures. The focus of this research was to investigate word recognition performance in order to inform reading interventions for low-progress early readers. Forty-six Grade 1 students were sampled and 11 were identified as low-progress readers. The results indicated that both…

  2. Word Recognition during Reading: The Interaction between Lexical Repetition and Frequency

    Science.gov (United States)

    Lowder, Matthew W.; Choi, Wonil; Gordon, Peter C.

    2013-01-01

    Memory studies utilizing long-term repetition priming have generally demonstrated that priming is greater for low-frequency words than for high-frequency words and that this effect persists if words intervene between the prime and the target. In contrast, word-recognition studies utilizing masked short-term repetition priming typically show that the magnitude of repetition priming does not differ as a function of word frequency and does not persist across intervening words. We conducted an eye-tracking while reading experiment to determine which of these patterns more closely resembles the relationship between frequency and repetition during the natural reading of a text. Frequency was manipulated using proper names that were high-frequency (e.g., Stephen) or low-frequency (e.g., Dominic). The critical name was later repeated in the sentence, or a new name was introduced. First-pass reading times and skipping rates on the critical name revealed robust repetition-by-frequency interactions such that the magnitude of the repetition-priming effect was greater for low-frequency names than for high-frequency names. In contrast, measures of later processing showed effects of repetition that did not depend on lexical frequency. These results are interpreted within a framework that conceptualizes eye-movement control as being influenced in different ways by lexical- and discourse-level factors. PMID:23283808

  3. Effect of Concentrated Language Encounter Method in Developing ...

    African Journals Online (AJOL)

    The paper examined the effect of the concentrated language encounter method in developing sight word recognition skill in primary school pupils in Cross River State. The purpose of the study was to find out the effect of the method on Primary One pupils' reading level and English sight word recognition skill. It also examined the extent to which the ...

  4. Differences in Word Recognition between Early Bilinguals and Monolinguals: Behavioral and ERP Evidence

    Science.gov (United States)

    Lehtonen, Minna; Hulten, Annika; Rodriguez-Fornells, Antoni; Cunillera, Toni; Tuomainen, Jyrki; Laine, Matti

    2012-01-01

    We investigated the behavioral and brain responses (ERPs) of bilingual word recognition to three fundamental psycholinguistic factors, frequency, morphology, and lexicality, in early bilinguals vs. monolinguals. Earlier behavioral studies have reported larger frequency effects in bilinguals' nondominant vs. dominant language and in some studies…

  5. ASL Handshape Stories, Word Recognition and Signing Deaf Readers: An Exploratory Study

    Science.gov (United States)

    Gietz, Merrilee R.

    2013-01-01

    The effectiveness of using American Sign Language (ASL) handshape stories to teach word recognition in whole stories using a descriptive case study approach was explored. Four profoundly deaf children ages 7 to 8, enrolled in a self-contained deaf education classroom in a public school in the south participated in the story time five-week…

  6. Does Set for Variability Mediate the Influence of Vocabulary Knowledge on the Development of Word Recognition Skills?

    Science.gov (United States)

    Tunmer, William E.; Chapman, James W.

    2012-01-01

    This study investigated the hypothesis that vocabulary influences word recognition skills indirectly through "set for variability", the ability to determine the correct pronunciation of approximations to spoken English words. One hundred forty children participating in a 3-year longitudinal study were administered reading and…

  7. Maturational changes in ear advantage for monaural word recognition in noise among listeners with central auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Mohsin Ahmed Shaikh

    2017-02-01

    This study aimed to investigate differences between ears in performance on a monaural word-recognition-in-noise test among individuals across a broad range of ages assessed for central auditory processing disorder (CAPD). Word recognition scores in quiet and in speech noise were collected retrospectively from the medical files of 107 individuals between the ages of 7 and 30 years who were diagnosed with CAPD. No ear advantage was found on the word-recognition-in-noise task in groups younger than ten years; performance in both ears was equally poor. Right ear performance improved across age groups, with scores of individuals above age 10 years falling within the normal range. In contrast, left ear performance remained essentially stable and in the impaired range across all age groups. Findings indicate poor left hemispheric dominance for speech perception in noise in children below the age of 10 years with CAPD. However, a right ear advantage on this monaural speech-in-noise task was observed for individuals 10 years and older.

  8. The relationship between recognition memory for emotion-laden words and white matter microstructure in normal older individuals.

    Science.gov (United States)

    Saarela, Carina; Karrasch, Mira; Ilvesmäki, Tero; Parkkola, Riitta; Rinne, Juha O; Laine, Matti

    2016-12-14

    Functional neuroimaging studies have shown age-related differences in brain activation and connectivity patterns for emotional memory. Previous studies with middle-aged and older adults have reported associations between episodic memory and white matter (WM) microstructure obtained from diffusion tensor imaging, but such studies on emotional memory remain few. To our knowledge, this is the first study to explore associations between WM microstructure as measured by fractional anisotropy (FA) and recognition memory for intentionally encoded positive, negative, and emotionally neutral words using tract-based spatial statistics applied to diffusion tensor imaging images in an elderly sample (44 cognitively intact adults aged 50-79 years). The use of tract-based spatial statistics enables the identification of WM tracts important to emotional memory without a priori assumptions required for region-of-interest approaches that have been used in previous work. The behavioral analyses showed a positivity bias, that is, a preference for positive words, in recognition memory. No statistically significant associations emerged between FA and memory for negative or neutral words. Controlling for age and memory performance for negative and neutral words, recognition memory for positive words was negatively associated with FA in several projection, association, and commissural tracts in the left hemisphere. This likely reflects the complex interplay between the mnemonic positivity bias, structural WM integrity, and functional brain compensatory mechanisms in older age. Also, the unexpected directionality of the results indicates that the WM microstructural correlates of emotional memory show unique characteristics in normal older individuals.

  9. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    Science.gov (United States)

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  10. Word add-in for ontology recognition: semantic enrichment of scientific literature

    Directory of Open Access Journals (Sweden)

    Naim Oscar

    2010-02-01

    Background: In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. Results: The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. Conclusions: The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written, and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark up their own work will help increase the amount and quality of machine-readable literature metadata.

  11. Word add-in for ontology recognition: semantic enrichment of scientific literature.

    Science.gov (United States)

    Fink, J Lynn; Fernicola, Pablo; Chandran, Rahul; Parastatidis, Savas; Wade, Alex; Naim, Oscar; Quinn, Gregory B; Bourne, Philip E

    2010-02-24

    In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata.
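
    The core idea of the add-in, recognizing ontology terms in running text and embedding the association as inline XML, can be illustrated with a toy term dictionary. The tag name, attribute, and term-to-identifier mapping below are invented for illustration and are not the add-in's actual schema or term source.

```python
# Toy sketch of the idea behind the add-in described above: scan text for known
# ontology terms and embed the association as inline XML. The tag name, attribute,
# and the tiny term-to-identifier dictionary are invented for illustration; they
# are not the add-in's actual schema or term source.

import re
from xml.sax.saxutils import escape

ONTOLOGY_TERMS = {
    "apoptosis": "GO:0006915",   # illustrative identifiers
    "kinase": "GO:0016301",
}

def mark_up(text):
    """Wrap each recognized term in an XML element carrying its ontology ID."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, ONTOLOGY_TERMS)) + r")\b",
                         flags=re.IGNORECASE)
    def tag(match):
        term = match.group(0)
        return f'<term id="{ONTOLOGY_TERMS[term.lower()]}">{escape(term)}</term>'
    return pattern.sub(tag, text)

print(mark_up("The kinase cascade ultimately triggers apoptosis."))
```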

  12. Do good and poor readers make use of morphemic structure in English word recognition?

    Directory of Open Access Journals (Sweden)

    Lynne G. Duncan

    2011-06-01

    Full Text Available The links between oral morphological awareness and the use of derivational morphology are examined in the English word recognition of 8-year-old good and poor readers. Morphological awareness was assessed by a sentence completion task. The role of morphological structure in lexical access was examined by manipulating the presence of embedded words and suffixes in items presented for lexical decision. Good readers were more accurate in the morphological awareness task but did not show facilitation for real derivations even though morpho-semantic information appeared to inform their lexical decisions. The poor readers, who were less accurate, displayed a strong lexicality effect in lexical decision and the presence of an embedded word led to facilitation for words and inhibition for pseudo-words. Overall, the results suggest that both good and poor readers of English are sensitive to the internal structure of written words, with the better readers showing most evidence of morphological analysis.

  13. Neighborhood Frequency Effect in Chinese Word Recognition: Evidence from Naming and Lexical Decision

    Science.gov (United States)

    Li, Meng-Feng; Gao, Xin-Yu; Chou, Tai-Li; Wu, Jei-Tun

    2017-01-01

    Neighborhood frequency is a crucial variable to know the nature of word recognition. Different from alphabetic scripts, neighborhood frequency in Chinese is usually confounded by component character frequency and neighborhood size. Three experiments were designed to explore the role of the neighborhood frequency effect in Chinese and the stimuli…

  14. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    Science.gov (United States)

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  15. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    This paper presents a method of speech recognition using pattern recognition techniques. Learning consists in determining the unique characteristics of a word (cepstral coefficients) by eliminating those characteristics that are different from one word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
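
    The front-end steps listed in this record (windowing, Fourier transform, log magnitude spectrum, cepstral coefficients) can be sketched for a single analysis frame as follows; the frame length, coefficient count, and toy input are arbitrary choices, and the noise-removal and filtering stages are omitted.

```python
# Minimal sketch of the front end described above for a single analysis frame:
# apply a Hamming window, move to the frequency domain with the Fourier transform,
# take the log magnitude spectrum, and keep the first few cepstral coefficients.
# Frame length, coefficient count, and the toy input are arbitrary illustrative
# choices; the noise-removal and filtering stages are omitted.

import numpy as np

def cepstral_coefficients(frame, n_coeffs=13):
    windowed = frame * np.hamming(len(frame))       # Hamming window
    spectrum = np.fft.rfft(windowed)                # frequency domain
    log_mag = np.log(np.abs(spectrum) + 1e-10)      # log magnitude spectrum
    cepstrum = np.fft.irfft(log_mag)                # real cepstrum
    return cepstrum[:n_coeffs]

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 0.025, 1 / sr)                 # one 25 ms frame
    frame = np.sin(2 * np.pi * 440 * t)             # toy tone standing in for speech
    print(cepstral_coefficients(frame).round(3))
```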

  16. The Effective Use of Symbols in Teaching Word Recognition to Children with Severe Learning Difficulties: A Comparison of Word Alone, Integrated Picture Cueing and the Handle Technique.

    Science.gov (United States)

    Sheehy, Kieron

    2002-01-01

    A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)

  17. Finding words in a language that allows words without vowels.

    Science.gov (United States)

    El Aissati, Abder; McQueen, James M; Cutler, Anne

    2012-07-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the constraint would be counter-productive in certain languages that allow stand-alone vowelless open-class words. One such language is Berber (where t is indeed a word). Berber listeners here detected words affixed to nonsense contexts with or without vowels. Length effects seen in other languages replicated in Berber, but in contrast to prior findings, word detection was not hindered by vowelless contexts. When words can be vowelless, otherwise universal constraints disfavoring vowelless words do not feature in spoken-word recognition. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Many Neighbors are not Silent. fMRI Evidence for Global Lexical Activity in Visual Word Recognition.

    Directory of Open Access Journals (Sweden)

    Mario eBraun

    2015-07-01

    Many neurocognitive studies have investigated the neural correlates of visual word recognition, some of which manipulated the orthographic neighborhood density of words and nonwords, believed to influence the activation of orthographically similar representations in a hypothetical mental lexicon. Previous neuroimaging research failed to find evidence for such global lexical activity associated with neighborhood density; rather, effects were interpreted to reflect semantic or domain-general processing. The present fMRI study revealed effects of lexicality, orthographic neighborhood density, and a lexicality by orthographic neighborhood density interaction in a silent reading task. For the first time, we found greater activity for words and nonwords with a high number of neighbors. We propose that this activity in the dorsomedial prefrontal cortex reflects activation of orthographically similar codes in verbal working memory, thus providing evidence for global lexical activity as the basis of the neighborhood density effect. The interaction of lexicality by neighborhood density in the ventromedial prefrontal cortex showed lower activity in response to words with a high number of neighbors compared to nonwords with a high number of neighbors. In the light of these results, the facilitatory effect for words and inhibitory effect for nonwords with many neighbors observed in previous studies can be understood as being due to the operation of a fast-guess mechanism for words and a temporal deadline mechanism for nonwords, as predicted by models of visual word recognition. Furthermore, we propose that the lexicality effect, with higher activity for words compared to nonwords in inferior parietal and middle temporal cortex, reflects the operation of an identification mechanism based on local lexico-semantic activity.

  19. Modulation of brain activity by multiple lexical and word form variables in visual word recognition: A parametric fMRI study.

    Science.gov (United States)

    Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann

    2008-09-01

    Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.

  20. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    OpenAIRE

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2012-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual varia...

  1. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    Science.gov (United States)

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  2. Preschoolers Explore Interactive Storybook Apps: The Effect on Word Recognition and Story Comprehension

    Science.gov (United States)

    Zipke, Marcy

    2017-01-01

    Two experiments explored the effects of reading digital storybooks on tablet computers with 25 preschoolers, aged 4-5. In the first experiment, the students' word recognition scores were found to increase significantly more when students explored a digital storybook and employed the read-aloud function than when they were read to from a comparable…

  3. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    Science.gov (United States)

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We

  4. The role of tone and segmental information in visual-word recognition in Thai.

    Science.gov (United States)

    Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira

    2017-07-01

    Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and orthographic information contributes more than phonological information.

  5. Artificial Sight Basic Research, Biomedical Engineering, and Clinical Advances

    CERN Document Server

    Humayun, Mark S; Chader, Gerald; Greenbaum, Elias

    2008-01-01

    Artificial sight is a frontier area of modern ophthalmology combining the multidisciplinary skills of surgical ophthalmology, biomedical engineering, biological physics, and psychophysical testing. Many scientific, engineering, and surgical challenges must be surmounted before widespread practical applications can be realized. The goal of Artificial Sight is to summarize the state-of-the-art research in this exciting area, and to describe some of the current approaches and initiatives that may help patients in a clinical setting. The Editors are active researchers in the fields of artificial sight, biomedical engineering and biological physics. They have received numerous professional awards and recognition for their work. The artificial sight team at the Doheny Eye Institute, led by Dr. Mark Humayun, is a world leader in this area of biomedical engineering and clinical research. Key Features Introduces and assesses the state of the art for a broad audience of biomedical engineers, biophysicists, and clinical...

  6. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu

    2013-01-01

    and composition of the two modalities was used for integration. F-measure for Eye-Gaze and Word Accuracy for ASR were used as metrics to evaluate our results. In reading task, we demonstrated a significant improvement in both Eye-Gaze f-measure and speech Word Accuracy. In sight translation task, significant...

  7. Talker and background noise specificity in spoken word recognition memory

    Directory of Open Access Journals (Sweden)

    Angela Cooper

    2017-11-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker’s voice (talker condition) and background noise (noise condition) using a delayed recognition memory paradigm. The speech and noise signals were spectrally separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.

  8. Word/sub-word lattices decomposition and combination for speech recognition

    OpenAIRE

    Le , Viet-Bac; Seng , Sopheap; Besacier , Laurent; Bigi , Brigitte

    2008-01-01

    International audience; This paper presents the benefit of using multiple lexical units in the post-processing stage of an ASR system. Since the use of sub-word units can reduce the high out-of-vocabulary rate and improve the lack of text resources in statistical language modeling, we propose several methods to decompose, normalize and combine word and sub-word lattices generated from different ASR systems. By using a sub-word information table, every word in a lattice can be decomposed into ...
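
    To illustrate the kind of decomposition the abstract describes, the sketch below is a minimal, hypothetical example (not the authors' implementation): it assumes a simple sub-word information table mapping each word to its sub-word units and rewrites word-level lattice edges into chains of sub-word edges, with a naive even split of the edge score.

        # Hypothetical sketch: decompose word-level lattice edges into sub-word edges
        # using a sub-word information table, as described in the abstract.
        # Assumed edge format: (start_node, end_node, label, score).

        subword_table = {                     # assumed lookup table: word -> sub-word units
            "recognition": ["re", "cog", "ni", "tion"],
            "speech": ["speech"],
        }

        def decompose_edge(edge, table):
            start, end, word, score = edge
            units = table.get(word, [word])           # unknown words stay whole
            per_unit = score / len(units)             # naive score redistribution
            edges, node = [], start
            for i, unit in enumerate(units):
                nxt = end if i == len(units) - 1 else f"{start}_{word}_{i}"
                edges.append((node, nxt, unit, per_unit))
                node = nxt
            return edges

        word_lattice = [(0, 1, "speech", -4.2), (1, 2, "recognition", -9.6)]
        subword_lattice = [e for edge in word_lattice for e in decompose_edge(edge, subword_table)]
        print(subword_lattice)

    In practice the normalization and combination steps the paper mentions would operate on such sub-word lattices produced by several ASR systems; the snippet only shows the decomposition step.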

  9. How does interhemispheric communication in visual word recognition work? Deciding between early and late integration accounts of the split fovea theory.

    Science.gov (United States)

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J

    2009-02-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

  10. Acute Alcohol Effects on Repetition Priming and Word Recognition Memory with Equivalent Memory Cues

    Science.gov (United States)

    Ray, Suchismita; Bates, Marsha E.

    2006-01-01

    Acute alcohol intoxication effects on memory were examined using a recollection-based word recognition memory task and a repetition priming task of memory for the same information without explicit reference to the study context. Memory cues were equivalent across tasks; encoding was manipulated by varying the frequency of occurrence (FOC) of words…

  11. The Effect of Lexical Frequency on Spoken Word Recognition in Young and Older Listeners

    Science.gov (United States)

    Revill, Kathleen Pirog; Spieler, Daniel H.

    2011-01-01

    When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place a greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults’ eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degradation of auditory quality in younger listeners does not reproduce this result. These data are most consistent with an increased role for lexical frequency with age. PMID:21707175

  12. The Developmental Lexicon Project: A behavioral database to investigate visual word recognition across the lifespan.

    Science.gov (United States)

    Schröter, Pauline; Schroeder, Sascha

    2017-12-01

    With the Developmental Lexicon Project (DeveL), we present a large-scale study that was conducted to collect data on visual word recognition in German across the lifespan. A total of 800 children from Grades 1 to 6, as well as two groups of younger and older adults, participated in the study and completed a lexical decision and a naming task. We provide a database for 1,152 German words, comprising behavioral data from seven different stages of reading development, along with sublexical and lexical characteristics for all stimuli. The present article describes our motivation for this project, explains the methods we used to collect the data, and reports analyses on the reliability of our results. In addition, we explored developmental changes in three marker effects in psycholinguistic research: word length, word frequency, and orthographic similarity. The database is available online.

  13. Active learning for ontological event extraction incorporating named entity recognition and unknown word handling.

    Science.gov (United States)

    Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong

    2016-01-01

    Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manual construction of labeled data, which are required for state-of-the-art supervised learning systems. Active learning is to choose the most informative documents for the supervised learning in order to reduce the amount of required manual annotations. Previous works of active learning, however, focused on the tasks of entity recognition and protein-protein interactions, but not on event extraction tasks for multiple event types. They also did not consider the evidence of event participants, which might be a clue for the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems as follows: We first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method for the task of named entity recognition. We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method

  14. Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

    Science.gov (United States)

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976

  15. Visual word recognition in deaf readers: lexicality is modulated by communication mode.

    Directory of Open Access Journals (Sweden)

    Laura Barca

    Full Text Available Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

  16. Visual word recognition in deaf readers: lexicality is modulated by communication mode.

    Science.gov (United States)

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

  17. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Full Text Available The lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason the use of multi-modal speech processing has been limited. In this study, we have developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement from the light reflected from the mouth region. In an experiment, we achieved a 66% word recognition rate using lip movement features alone. This experimental result shows that our sensor can be used as a tool for multi-modal speech processing by combining it with a microphone mounted on the headset.

  18. False recognition depends on depth of prior word processing: a magnetoencephalographic (MEG) study.

    Science.gov (United States)

    Walla, P; Hufnagl, B; Lindinger, G; Deecke, L; Imhof, H; Lang, W

    2001-04-01

    Brain activity was measured with a whole head magnetoencephalograph (MEG) during the test phases of word recognition experiments. Healthy young subjects had to discriminate between previously presented and new words. During prior study phases two different levels of word processing were provided according to two different kinds of instructions (shallow and deep encoding). Event-related fields (ERFs) associated with falsely recognized words (false alarms) were found to depend on the depth of processing during the prior study phase. False alarms elicited higher brain activity (as reflected by dipole strength) in case of prior deep encoding as compared to shallow encoding between 300 and 500 ms after stimulus onset at temporal brain areas. Between 500 and 700 ms we found evidence for differences in the involvement of neural structures related to both conditions of false alarms. Furthermore, the number of false alarms was found to depend on depth of processing. Shallow encoding led to a higher number of false alarms than deep encoding. All data are discussed as strong support for the ideas that a certain level of word processing is performed by a distinct set of neural systems and that the same neural systems which encode information are reactivated during the retrieval.

  19. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words which were orally presented by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than for the auditory-only condition in the children with normal hearing (P < 0.05), whereas no significant difference was found between the auditory-only and audiovisual presentation conditions (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been effective for them; i.e., if a child with hearing impairment who uses a CI or HA obtains higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately thanks to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Miniaturized day/night sight in Soldato Futuro program

    Science.gov (United States)

    Landini, Alberto; Cocchi, Alessandro; Bardazzi, Riccardo; Sardelli, Mauro; Puntri, Stefano

    2013-06-01

    The market for sights for 5.56 mm assault rifles is dominated mainly by three types of systems: the TWS (Thermal Weapon Sight), the Pocket Scope with Weapon Mount, and the Clip-on. The latter are designed primarily for special forces and sniper use, while the TWS design is driven mainly by DRI (Detection, Recognition, Identification) requirements. The Pocket Scope design is focused on respecting SWaP (Size, Weight and Power dissipation) requirements. Compared to TWS systems, over the last two years there has been significant technological growth in Pocket Scope/Weapon Mount solutions, concentrated on compressing the overall dimensions. The trend for assault rifles is the use of small-size/light-weight (SWaP) IR sights, suitable mainly for close combat operations but also for extraordinary use as pocket scopes, handheld or helmet mounted. The latest developments made by Selex ES S.p.A. respond precisely to this trend, through a miniaturized Day/Night sight embedding state-of-the-art sensors and using standard protocols (USB 2.0, Bluetooth 4.0) for interfacing with PDAs, wearable computers, etc., while maintaining the "shoot around the corner" capability. Indeed, inside the miniaturized Day/Night sight architecture, a wireless link using Bluetooth technology has been implemented to transmit the video stream of the rifle sight to a helmet-mounted display. The video of the rifle sight is transmitted only to the eyepiece of the soldier shouldering the rifle.

  1. Putting It All Together: A Unified Account of Word Recognition and Reaction-Time Distributions

    Science.gov (United States)

    Norris, Dennis

    2009-01-01

    R. Ratcliff, P. Gomez, and G. McKoon (2004) suggested much of what goes on in lexical decision is attributable to decision processes and may not be particularly informative about word recognition. They proposed that lexical decision should be characterized by a decision process, taking the form of a drift-diffusion model (R. Ratcliff, 1978), that…

  2. Finding words in a language that allows words without vowels

    NARCIS (Netherlands)

    El Aissati, A.; McQueen, J.M.; Cutler, A.

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the

  3. Serial and parallel processing in reading: investigating the effects of parafoveal orthographic information on nonisolated word recognition.

    Science.gov (United States)

    Dare, Natasha; Shillcock, Richard

    2013-01-01

    We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n - 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual.

  4. The effect of prosody teaching on developing word recognition skills for interpreter trainees. An experimental study

    NARCIS (Netherlands)

    Yenkimaleki, M.; V.J., van Heuven

    2016-01-01

    The present study investigates the effect of the explicit teaching of prosodic features on developing word recognition skills with interpreter trainees. Two groups of student interpreters were composed. All were native speakers of Farsi who studied English translation and interpreting at the BA

  5. The effect of prosody teaching on developing word recognition skills for interpreter trainees : An experimental study

    NARCIS (Netherlands)

    Yenkimaleki, M.; V.J., van Heuven

    2016-01-01

    The present study investigates the effect of the explicit teaching of prosodic features on developing word recognition skills with interpreter trainees. Two groups of student interpreters were composed. All were native speakers of Farsi who studied English translation and interpreting at the BA

  6. The Impact of Orthographic Connectivity on Visual Word Recognition in Arabic: A Cross-Sectional Study

    Science.gov (United States)

    Khateb, Asaid; Khateb-Abdelgani, Manal; Taha, Haitham Y.; Ibrahim, Raphiq

    2014-01-01

    This study aimed at assessing the effects of letters' connectivity in Arabic on visual word recognition. For this purpose, reaction times (RTs) and accuracy scores were collected from ninety third-, sixth-, and ninth-grade native Arabic speakers during a lexical decision task, using fully connected (Cw), partially connected (PCw) and…

  7. Novel grid-based optical Braille conversion: from scanning to wording

    Science.gov (United States)

    Yoosefi Babadi, Majid; Jafari, Shahram

    2011-12-01

    Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising them and converting them into English ASCII text documents inside a computer. The resulting words are verified against the relevant dictionary to provide the final output. The algorithms employed in this article can easily be modified for implementation in other visual pattern recognition systems and text extraction applications. This technique has several advantages, including simplicity of the algorithm, high speed of execution, the ability to help visually impaired persons and blind people work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille understand hard-copy Braille manuscripts.
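
    As a rough illustration of the final dot-pattern-to-text step such a pipeline needs (the article's own grid-fitting and dictionary-verification stages are not reproduced here), the sketch below maps recognised 2×3 Braille cells to letters; the dot numbering and the small lookup table follow standard Braille conventions, while the function and variable names are hypothetical.

        # Hypothetical sketch: convert recognised Braille cells (2x3 dot grids) to text.
        # A cell is encoded as a set of raised-dot numbers (standard numbering:
        # 1-2-3 down the left column, 4-5-6 down the right column).

        BRAILLE_TO_CHAR = {                   # small excerpt of the standard letter table
            frozenset({1}): "a",
            frozenset({1, 2}): "b",
            frozenset({1, 4}): "c",
            frozenset({1, 4, 5}): "d",
            frozenset({1, 5}): "e",
        }

        def cells_to_text(cells):
            """Map each recognised cell to a character; unknown cells become '?'."""
            return "".join(BRAILLE_TO_CHAR.get(frozenset(cell), "?") for cell in cells)

        # Example: dots detected by an (assumed) upstream grid-fitting stage
        print(cells_to_text([{1, 2}, {1}, {1, 4}]))   # -> "bac"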

  8. Predictive coding accelerates word recognition and learning in the early stages of language development.

    Science.gov (United States)

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  9. Effect of an unrelated fluent action on word recognition: A case of motor discrepancy.

    Science.gov (United States)

    Brouillet, Denis; Milhau, Audrey; Brouillet, Thibaut; Servajean, Philippe

    2017-06-01

    It is now well established that motor fluency affects cognitive processes, including memory. In two experiments participants learned a list of words and then performed a recognition task. The original feature of our procedure is that before judging the words they had to perform a fluent gesture (i.e., typing a letter dyad). The dyads comprised letters located on either the right or left side of the keyboard. Participants typed dyads with their right or left index finger; the required movement was either very small (dyad composed of adjacent letters, Experiment 1) or slightly larger (dyad composed of letters separated by one key, Experiment 2). The results show that when the gesture was performed in the ipsilateral space the probability of recognizing a word increased (to a lesser extent, the same holds for the dominant hand, Experiment 2). Moreover, a binary logistic regression showed that the probability of recognizing a word was proportional to the speed with which the gesture was performed. These results are discussed in terms of a feeling of familiarity emerging from motor discrepancy.

  10. The control of working memory resources in intentional forgetting: evidence from incidental probe word recognition.

    Science.gov (United States)

    Fawcett, Jonathan M; Taylor, Tracy L

    2012-01-01

    We combined an item-method directed forgetting paradigm with a secondary task requiring a response to discriminate the color of probe words presented 1400 ms, 1800 ms or 2600 ms following each study phase memory instruction. The speed to make the color discrimination was used to assess the cognitive demands associated with instantiating Remember (R) and Forget (F) instructions; incidental memory for probe words was used to assess whether instantiating an F instruction also affects items presented in close temporal proximity. Discrimination responses were slower following F than R instructions at the two longest intervals. Critically, at the 1800 ms interval, incidental probe word recognition was worse following F than R instructions, particularly when the study word was successfully forgotten (as opposed to unintentionally remembered). We suggest that intentional forgetting is an active cognitive process associated with establishing control over the contents of working memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Comparison of the Effects of SMART Board Technology and Flash Card Instruction on Sight Word Recognition and Observational Learning

    Science.gov (United States)

    Mechling, Linda C.; Gast, David L.; Thompson, Kimberly L.

    2009-01-01

    This study compared the effectiveness of SMART Board, interactive whiteboard technology and traditional flash cards in teaching reading in a small-group instructional arrangement. Three students with moderate intellectual disabilities were taught to read grocery store aisle marker words under each condition. Observational learning (students…

  12. Evaluating the developmental trajectory of the episodic buffer component of working memory and its relation to word recognition in children.

    Science.gov (United States)

    Wang, Shinmin; Allen, Richard J; Lee, Jun Ren; Hsieh, Chia-En

    2015-05-01

    The creation of temporary bound representation of information from different sources is one of the key abilities attributed to the episodic buffer component of working memory. Whereas the role of working memory in word learning has received substantial attention, very little is known about the link between the development of word recognition skills and the ability to bind information in the episodic buffer of working memory and how it may develop with age. This study examined the performance of Grade 2 children (8 years old), Grade 3 children (9 years old), and young adults on a task designed to measure their ability to bind visual and auditory-verbal information in working memory. Children's performance on this task significantly correlated with their word recognition skills even when chronological age, memory for individual elements, and other possible reading-related factors were taken into account. In addition, clear developmental trajectories were observed, with improvements in the ability to hold temporary bound information in working memory between Grades 2 and 3, and between the child and adult groups, that were independent from memory for the individual elements. These findings suggest that the capacity to temporarily bind novel auditory-verbal information to visual form in working memory is linked to the development of word recognition in children and improves with age. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Functional magnetic resonance imaging correlates of emotional word encoding and recognition in depression and anxiety disorders.

    Science.gov (United States)

    van Tol, Marie-José; Demenescu, Liliana R; van der Wee, Nic J A; Kortekaas, Rudie; Nielen, Marjan M A; den Boer, J A; Renken, Remco J; van Buchem, Mark A; Zitman, Frans G; Aleman, André; Veltman, Dick J

    2012-04-01

    Major depressive disorder (MDD), panic disorder, and social anxiety disorder are among the most prevalent and frequently co-occurring psychiatric disorders in adults and may be characterized by a common deficiency in processing of emotional information. We used functional magnetic resonance imaging during the performance of an emotional word encoding and recognition paradigm in patients with MDD (n = 51), comorbid MDD and anxiety (n = 59), panic disorder and/or social anxiety disorder without comorbid MDD (n = 56), and control subjects (n = 49). In addition, we studied effects of illness severity, regional brain volume, and antidepressant use. Patients with MDD, prevalent anxiety disorders, or both showed a common hyporesponse in the right hippocampus during positive (>neutral) word encoding compared with control subjects. During negative encoding, increased insular activation was observed in both depressed groups (MDD and MDD + anxiety), whereas increased amygdala and anterior cingulate cortex activation during positive word encoding were observed as depressive state-dependent effects in MDD only. During recognition, anxiety patients showed increased inferior frontal gyrus activation. Overall, effects were unaffected by medication use and regional brain volume. Hippocampal blunting during positive word encoding is a generic effect in depression and anxiety disorders, which may constitute a common vulnerability factor. Increased insular and amygdalar involvement during negative word encoding may underlie heightened experience of, and an inability to disengage from, negative emotions in depressive disorders. Our results emphasize a common neurobiological deficiency in both MDD and anxiety disorders, which may mark a general insensitiveness to positive information. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  14. Tune in to the Tone: Lexical Tone Identification is Associated with Vocabulary and Word Recognition Abilities in Young Chinese Children.

    Science.gov (United States)

    Tong, Xiuli; Tong, Xiuhong; McBride-Chang, Catherine

    2015-12-01

    Lexical tone is one of the most prominent features in the phonological representation of words in Chinese. However, little, if any, research to date has directly evaluated how young Chinese children's lexical tone identification skills contribute to vocabulary acquisition and character recognition. The present study distinguished lexical tones from segmental phonological awareness and morphological awareness in order to estimate the unique contribution of lexical tone in early vocabulary acquisition and character recognition. A sample of 199 Cantonese children aged 5-6 years was assessed on measures of lexical tone identification, segmental phonological awareness, morphological awareness, nonverbal ability, vocabulary knowledge, and Chinese character recognition. It was found that lexical tone awareness and morphological awareness were both associated with vocabulary knowledge and character recognition. However, there was a significant relationship between lexical tone awareness and both vocabulary knowledge and character recognition, even after controlling for the effects of age, nonverbal ability, segmental phonological awareness and morphological awareness. These findings suggest that lexical tone is a key factor accounting for individual variance in young children's lexical acquisition in Chinese, and that lexical tone should be considered in understanding how children learn new Chinese vocabulary words, in either oral or written forms.

  15. Get rich quick: the signal to respond procedure reveals the time course of semantic richness effects during visual word recognition.

    Science.gov (United States)

    Hargreaves, Ian S; Pexman, Penny M

    2014-05-01

    According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed effects to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. The development of written word processing: the case of deaf children

    Directory of Open Access Journals (Sweden)

    Jacqueline Leybaert

    2008-04-01

    Full Text Available Reading is a highly complex, flexible and sophisticated cognitive activity, and word recognition constitutes only a small and limited part of the whole process. It seems, however, that for various reasons word recognition is worth studying separately from other components. Considering that writing systems are secondary codes representing the language, word recognition mechanisms may appear as an interface between printed material and general language capabilities, and thus specific difficulties in reading and spelling acquisition should be located at the level of isolated word identification (see e.g. Crowder, 1982, for discussion). Moreover, it appears that a prominent characteristic of poor readers is their lack of efficiency in the processing of isolated words (Mitchell, 1982; Stanovich, 1982). And finally, word recognition seems to be a more automatic and less controlled component of the whole reading process.

  17. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    Science.gov (United States)

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  18. The influence of print exposure on the body-object interaction effect in visual word recognition.

    Science.gov (United States)

    Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M

    2012-01-01

    We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  19. Motivational mechanisms (BAS) and prefrontal cortical activation contribute to recognition memory for emotional words. rTMS effect on performance and EEG (alpha band) measures.

    Science.gov (United States)

    Balconi, Michela; Cobelli, Chiara

    2014-10-01

    The present research addressed the question of where memories for emotional words could be represented in the brain. A second main question was related to the effect of personality traits, in terms of the Behavior Activation System (BAS), in emotional word recognition. We tested the role of the left DLPFC (LDLPFC) by performing a memory task in which old (previously encoded targets) and new (previously not encoded distractors) positive or negative emotional words had to be recognized. High-BAS and low-BAS subjects were compared when a repetitive TMS (rTMS) was applied on the LDLPFC. We found significant differences between high-BAS vs. low-BAS subjects, with better performance for high-BAS in response to positive words. In parallel, an increased left cortical activity (alpha desynchronization) was observed for high-BAS in the case of positive words. Thus, we can conclude that the left approach-related hemisphere, underlying BAS, may support faster recognition of positive words. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. The Effects of Lexical Pitch Accent on Infant Word Recognition in Japanese

    Directory of Open Access Journals (Sweden)

    Mitsuhiko Ota

    2018-01-01

    Full Text Available Learners of lexical tone languages (e.g., Mandarin) develop sensitivity to tonal contrasts and recognize pitch-matched, but not pitch-mismatched, familiar words by 11 months. Learners of non-tone languages (e.g., English) also show a tendency to treat pitch patterns as lexically contrastive up to about 18 months. In this study, we examined if this early-developing capacity to lexically encode pitch variations enables infants to acquire a pitch accent system, in which pitch-based lexical contrasts are obscured by the interaction of lexical and non-lexical (i.e., intonational) features. Eighteen 17-month-olds learning Tokyo Japanese were tested on their recognition of familiar words with the expected pitch or the lexically opposite pitch pattern. In early trials, infants were faster in shifting their eye gaze from the distractor object to the target object than in shifting from the target to the distractor in the pitch-matched condition. In later trials, however, infants showed faster distractor-to-target than target-to-distractor shifts in both the pitch-matched and pitch-mismatched conditions. We interpret these results to mean that, in a pitch-accent system, the ability to use pitch variations to recognize words is still in a nascent state at 17 months.

  1. Dynamic Programming Algorithms in Speech Recognition

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2008-01-01

    Full Text Available In a speech recognition system for isolated words, recognition requires comparing the input signal of a word against the various words of the dictionary. The problem can be solved efficiently by a dynamic comparison algorithm whose goal is to put the temporal scales of the two words into optimal correspondence. An algorithm of this type is Dynamic Time Warping. This paper presents two alternative implementations of the algorithm designed for the recognition of isolated words.
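
    To make the temporal-alignment idea concrete, here is a minimal sketch of the classic dynamic-programming recurrence behind Dynamic Time Warping; the one-dimensional feature sequences, the absolute-difference local distance, and the toy templates are illustrative assumptions, not the paper's implementation.

        # Minimal Dynamic Time Warping sketch: aligns two feature sequences and
        # returns the cumulative alignment cost (smaller = more similar).
        def dtw_distance(a, b):
            n, m = len(a), len(b)
            INF = float("inf")
            cost = [[INF] * (m + 1) for _ in range(n + 1)]
            cost[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    local = abs(a[i - 1] - b[j - 1])          # local distance (assumed 1-D features)
                    cost[i][j] = local + min(cost[i - 1][j],      # insertion
                                             cost[i][j - 1],      # deletion
                                             cost[i - 1][j - 1])  # match
            return cost[n][m]

        # Toy usage: compare an "input word" against two dictionary templates
        templates = {"yes": [1, 3, 4, 3, 1], "no": [2, 2, 5, 5, 2]}
        signal = [1, 2, 4, 4, 3, 1]
        print(min(templates, key=lambda w: dtw_distance(signal, templates[w])))  # -> "yes"

    In a real recogniser the sequences would be frames of spectral features rather than single numbers, but the recurrence is the same.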

  2. How specific are specific comprehension difficulties?

    DEFF Research Database (Denmark)

    Rønberg, Louise Flensted-Jensen; Petersen, Dorthe Klint

    2016-01-01

    This study explores the occurrence of poor comprehenders, i.e., children identified with reading comprehension difficulties in spite of age-appropriate word reading skills. It supports the findings that some children do show poor reading comprehension in spite of age-appropriate word reading as measured on a phonological coding measure. However, the proportion was smaller than the often reported 10-15 %, and even smaller when average sight word recognition was also set as a criterion for word reading ability. Compared to average comprehenders, the poor comprehenders' sight word recognition and daily reading of literary texts were significantly below that of average readers. This study indicates that a lack of reading experience and, likewise, a lack of fluent word reading may be important factors in understanding nine-year-old poor comprehenders' difficulties.

  3. The Influence of Print Exposure on the Body-Object Interaction Effect in Visual Word Recognition

    Directory of Open Access Journals (Sweden)

    Dana eHansen

    2012-05-01

    Full Text Available We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger facilitatory BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that a facilitatory BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  4. Performance-intensity functions of Mandarin word recognition tests in noise: test dialect and listener language effects.

    Science.gov (United States)

    Liu, Danzheng; Shi, Lu-Feng

    2013-06-01

    This study established the performance-intensity function for Beijing and Taiwan Mandarin bisyllabic word recognition tests in noise in native speakers of Wu Chinese. Effects of the test dialect and listeners' first language on psychometric variables (i.e., slope and 50%-correct threshold) were analyzed. Thirty-two normal-hearing Wu-speaking adults who used Mandarin since early childhood were compared to 16 native Mandarin-speaking adults. Both Beijing and Taiwan bisyllabic word recognition tests were presented at 8 signal-to-noise ratios (SNRs) in 4-dB steps (-12 dB to +16 dB). At each SNR, a half list (25 words) was presented in speech-spectrum noise to listeners' right ear. The order of the test, SNR, and half list was randomized across listeners. Listeners responded orally and in writing. Overall, the Wu-speaking listeners performed comparably to the Mandarin-speaking listeners on both tests. Compared to the Taiwan test, the Beijing test yielded a significantly lower threshold for both the Mandarin- and Wu-speaking listeners, as well as a significantly steeper slope for the Wu-speaking listeners. Both Mandarin tests can be used to evaluate Wu-speaking listeners. Of the 2, the Taiwan Mandarin test results in more comparable functions across listener groups. Differences in the performance-intensity function between listener groups and between tests indicate a first language and dialectal effect, respectively.
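
    For readers unfamiliar with the psychometric variables mentioned here, the sketch below shows one common way to estimate a 50%-correct threshold and slope by fitting a logistic performance-intensity function to proportion-correct scores across SNRs; the data points, the parameterisation, and the least-squares fit are illustrative assumptions, not the study's analysis.

        # Illustrative logistic performance-intensity fit: estimates the 50%-correct
        # threshold (midpoint) and slope from proportion-correct scores across SNRs.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(snr, threshold, slope):
            # Proportion correct as a function of SNR (dB); slope is in proportion/dB at the midpoint.
            return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - threshold)))

        snrs = np.array([-12, -8, -4, 0, 4, 8, 12, 16], dtype=float)                 # assumed test SNRs
        prop_correct = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.96, 0.98])    # made-up scores

        params, _ = curve_fit(logistic, snrs, prop_correct, p0=[0.0, 0.05])
        threshold_db, slope_at_midpoint = params
        print(f"50%-correct threshold: {threshold_db:.1f} dB SNR, slope: {slope_at_midpoint:.3f}/dB")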

  5. Resolving the locus of cAsE aLtErNaTiOn effects in visual word recognition: Evidence from masked priming.

    Science.gov (United States)

    Perea, Manuel; Vergara-Martínez, Marta; Gomez, Pablo

    2015-09-01

    Determining the factors that modulate the early access of abstract lexical representations is imperative for the formulation of a comprehensive neural account of visual-word identification. There is a current debate on whether the effects of case alternation (e.g., tRaIn vs. train) have an early or late locus in the word-processing stream. Here we report a lexical decision experiment using a technique that taps the early stages of visual-word recognition (i.e., masked priming). In the design, uppercase targets could be preceded by an identity/unrelated prime that could be in lowercase or alternating case (e.g., table-TABLE vs. crash-TABLE; tAbLe-TABLE vs. cRaSh-TABLE). Results revealed that the lowercase and alternating case primes were equally effective at producing an identity priming effect. This finding demonstrates that case alternation does not hinder the initial access to the abstract lexical representations during visual-word recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition.

    Science.gov (United States)

    Stevenson, Ryan A; Nelms, Caitlin E; Baum, Sarah H; Zurkovsky, Lilia; Barense, Morgan D; Newhouse, Paul A; Wallace, Mark T

    2015-01-01

    Over the next 2 decades, a dramatic shift in the demographics of society will take place, with a rapid growth in the population of older adults. One of the most common complaints with healthy aging is a decreased ability to successfully perceive speech, particularly in noisy environments. In such noisy environments, the presence of visual speech cues (i.e., lip movements) provide striking benefits for speech perception and comprehension, but previous research suggests that older adults gain less from such audiovisual integration than their younger peers. To determine at what processing level these behavioral differences arise in healthy-aging populations, we administered a speech-in-noise task to younger and older adults. We compared the perceptual benefits of having speech information available in both the auditory and visual modalities and examined both phoneme and whole-word recognition across varying levels of signal-to-noise ratio. For whole-word recognition, older adults relative to younger adults showed greater multisensory gains at intermediate SNRs but reduced benefit at low SNRs. By contrast, at the phoneme level both younger and older adults showed approximately equivalent increases in multisensory gain as signal-to-noise ratio decreased. Collectively, the results provide important insights into both the similarities and differences in how older and younger adults integrate auditory and visual speech cues in noisy environments and help explain some of the conflicting findings in previous studies of multisensory speech perception in healthy aging. These novel findings suggest that audiovisual processing is intact at more elementary levels of speech perception in healthy-aging populations and that deficits begin to emerge only at the more complex word-recognition level of speech signals. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Computer-Mediated Input, Output and Feedback in the Development of L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; Cheng, Junyu; O'Toole, John Mitchell

    2015-01-01

    This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary level English as a Second Language (ESL) classes. Classes were either assigned to…

  8. Physical Feature Encoding and Word Recognition Abilities Are Altered in Children with Intractable Epilepsy: Preliminary Neuromagnetic Evidence

    Science.gov (United States)

    Pardos, Maria; Korostenskaja, Milena; Xiang, Jing; Fujiwara, Hisako; Lee, Ki H.; Horn, Paul S.; Byars, Anna; Vannest, Jennifer; Wang, Yingying; Hemasilpin, Nat; Rose, Douglas F.

    2015-01-01

    Objective evaluation of language function is critical for children with intractable epilepsy under consideration for epilepsy surgery. The purpose of this preliminary study was to evaluate word recognition in children with intractable epilepsy by using magnetoencephalography (MEG). Ten children with intractable epilepsy (M/F 6/4, mean ± SD 13.4 ± 2.2 years) were matched on age and sex to healthy controls. Common nouns were presented simultaneously from visual and auditory sensory inputs in “match” and “mismatch” conditions. Neuromagnetic responses M1, M2, M3, M4, and M5 with latencies of ~100 ms, ~150 ms, ~250 ms, ~350 ms, and ~450 ms, respectively, elicited during the “match” condition were identified. Compared to healthy children, epilepsy patients had both significantly delayed latency of the M1 and reduced amplitudes of M3 and M5 responses. These results provide neurophysiologic evidence of altered word recognition in children with intractable epilepsy. PMID:26146459

  9. Assessing the Usefulness of Google Books’ Word Frequencies for Psycholinguistic Research on Word Processing

    Science.gov (United States)

    Brysbaert, Marc; Keuleers, Emmanuel; New, Boris

    2011-01-01

    In this Perspective Article we assess the usefulness of Google's new word frequencies for word recognition research (lexical decision and word naming). We find that, despite the massive corpus on which the Google estimates are based (131 billion words from books published in the United States alone), the Google American English frequencies explain 11% less of the variance in the lexical decision times from the English Lexicon Project (Balota et al., 2007) than the SUBTLEX-US word frequencies, based on a corpus of 51 million words from film and television subtitles. Further analyses indicate that word frequencies derived from recent books (published after 2000) are better predictors of word processing times than frequencies based on the full corpus, and that word frequencies based on fiction books predict word processing times better than word frequencies based on the full corpus. The most predictive word frequencies from Google still do not explain more of the variance in word recognition times of undergraduate students and old adults than the subtitle-based word frequencies. PMID:21713191
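
    The "variance explained" comparisons reported here amount to regressing word recognition times on log-transformed frequency estimates and comparing R² values. The sketch below shows the basic computation with made-up numbers; it is not the authors' analysis pipeline, and real comparisons use the full English Lexicon Project dataset rather than a handful of toy items.

        # Illustrative R^2 comparison: how much variance in lexical decision RTs
        # two different frequency counts explain (toy numbers, not real corpus data).
        import numpy as np

        rt = np.array([612.0, 598.0, 655.0, 700.0, 580.0, 640.0])           # hypothetical mean RTs (ms)
        freq_subtitles = np.array([1200, 3500, 400, 90, 8000, 700])          # hypothetical counts
        freq_books = np.array([900, 2800, 650, 150, 6000, 1200])             # hypothetical counts

        def variance_explained(freq, rt):
            x = np.log10(freq)                          # frequencies are used on a log scale
            slope, intercept = np.polyfit(x, rt, 1)     # simple linear regression
            residuals = rt - (slope * x + intercept)
            return 1.0 - residuals.var() / rt.var()

        print(f"subtitle-based R^2: {variance_explained(freq_subtitles, rt):.2f}")
        print(f"book-based R^2:     {variance_explained(freq_books, rt):.2f}")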

  10. Reading Big Words: Instructional Practices to Promote Multisyllabic Word Reading Fluency

    Science.gov (United States)

    Toste, Jessica R.; Williams, Kelly J.; Capin, Philip

    2017-01-01

    Poorly developed word recognition skills are the most pervasive and debilitating source of reading challenges for students with learning disabilities (LD). With a notable decrease in word reading instruction in the upper elementary grades, struggling readers receive fewer instructional opportunities to develop proficient word reading skills, yet…

  11. The low-frequency encoding disadvantage: Word frequency affects processing demands.

    Science.gov (United States)

    Diana, Rachel A; Reder, Lynne M

    2006-07-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying items (pictures and words of varying frequencies) along with low-frequency words reduces performance for those stimuli. Copyright 2006 APA, all rights reserved.

  12. Effectiveness of a Phonological Awareness Training Intervention on Word Recognition Ability of Children with Autism Spectrum Disorder

    Science.gov (United States)

    Mohammed, Adel Abdulla; Mostafa, Amaal Ahmed

    2012-01-01

    This study describes an action research project designed to improve word recognition ability of children with Autism Spectrum Disorder. A total of 47 children diagnosed as having Autism Spectrum Disorder using Autism Spectrum Disorder Evaluation Inventory (Mohammed, 2006), participated in this study. The sample was randomly divided into two…

  13. Contribution to automatic speech recognition. Analysis of the direct acoustical signal. Recognition of isolated words and phoneme identification

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1981-01-01

    This report deals with the acoustical-phonetic step of automatic speech recognition. The parameters used are the extrema of the acoustical signal (coded in amplitude and duration). This coding method, whose properties are described, is simple and well adapted to digital processing. The quality and intelligibility of the coded signal after reconstruction are particularly satisfactory. An experiment on the automatic recognition of isolated words has been carried out using this coding system. We have designed a filtering algorithm operating on the parameters of the coding; thus the characteristics of the formants can be derived under certain conditions, which are discussed. Using these characteristics, the identification of a large part of the phonemes for a given speaker was achieved. Carrying on the studies required the development of a particular methodology of real-time processing which allowed immediate evaluation of the improvement of the programs. Such processing on temporal coding of the acoustical signal is extremely powerful and could represent, used in connection with other methods, an efficient tool for the automatic processing of speech. (author)
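
    The amplitude-and-duration coding of signal extrema described here can be illustrated in a few lines; the peak-picking rule, the toy waveform, and the assumed 8 kHz sampling rate below are illustrative assumptions, not the report's actual coder.

        # Illustrative coder: represent a sampled signal by its local extrema,
        # each coded as (amplitude, duration since the previous extremum).
        def code_extrema(samples, sample_period):
            extrema, last_index = [], 0
            for i in range(1, len(samples) - 1):
                is_max = samples[i - 1] < samples[i] >= samples[i + 1]
                is_min = samples[i - 1] > samples[i] <= samples[i + 1]
                if is_max or is_min:
                    extrema.append((samples[i], (i - last_index) * sample_period))
                    last_index = i
            return extrema

        signal = [0.0, 0.4, 0.9, 0.5, -0.2, -0.7, -0.3, 0.2, 0.6, 0.1]    # toy waveform
        print(code_extrema(signal, sample_period=0.125e-3))                # 8 kHz sampling assumed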

  14. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue

    2006-01-01

    BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movement aids the recognition of spoken words. Conventionally it is thought that the visual input disambiguates otherwise confusing auditory input. HYPOTHESIS: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p…).

  15. Experience with compound words influences their processing: An eye movement investigation with English compound words.

    Science.gov (United States)

    Juhasz, Barbara J

    2016-11-14

    Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.

  16. Recognition memory of neutral words can be impaired by task-irrelevant emotional encoding contexts: behavioral and electrophysiological evidence.

    Science.gov (United States)

    Zhang, Qin; Liu, Xuan; An, Wei; Yang, Yang; Wang, Yinan

    2015-01-01

    Previous studies on the effects of emotional context on memory for centrally presented neutral items have obtained inconsistent results. And in most of those studies subjects were asked to either make a connection between the item and the context at study or retrieve both the item and the context. When no response for the contexts is required, how emotional contexts influence memory for neutral items is still unclear. Thus, the present study attempted to investigate the influences of four types of emotional picture contexts on recognition memory of neutral words using both behavioral and event-related potential (ERP) measurements. During study, words were superimposed centrally onto emotional contexts, and subjects were asked to just remember the words. During test, both studied and new words were presented without the emotional contexts and subjects had to make "old/new" judgments for those words. The results revealed that, compared with the neutral context, the negative contexts and positive high-arousing context impaired recognition of words. ERP results at encoding demonstrated that, compared with items presented in the neutral context, items in the positive and negative high-arousing contexts elicited more positive ERPs, which probably reflects an automatic process of attention capturing of high-arousing context as well as a conscious and effortful process of overcoming the interference of high-arousing context. During retrieval, significant FN400 old/new effects occurred in conditions of the negative low-arousing, positive, and neutral contexts but not in the negative high-arousing condition. Significant LPC old/new effects occurred in all conditions of context. However, the LPC old/new effect in the negative high-arousing condition was smaller than that in the positive high-arousing and low-arousing conditions. These results suggest that emotional context might influence both the familiarity and recollection processes.

  17. The gender congruency effect during bilingual spoken-word recognition

    Science.gov (United States)

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  18. Word wheels

    CERN Document Server

    Clark, Kathryn

    2013-01-01

    Targeting the specific problems learners have with language structure, these multi-sensory exercises appeal to all age groups, including adults. Exercises use sight, sound and touch and are also suitable for English as an Additional Language and Basic Skills students. Word Wheels includes off-the-shelf resources such as lesson plans and photocopiable worksheets, an interactive CD with practice exercises, and support material for the busy teacher or non-specialist staff, as well as homework activities.

  19. Word recognition memory in adults with attention-deficit/hyperactivity disorder as reflected by event-related potentials

    Directory of Open Access Journals (Sweden)

    Vanessa Prox-Vagedes

    2011-03-01

    Full Text Available Objective: Attention-deficit/hyperactivity disorder (ADHD) is increasingly diagnosed in adults. In this study we address the question of whether there are impairments in recognition memory. Methods: In the present study 13 adults diagnosed with ADHD according to DSM-IV and 13 healthy controls were examined with respect to event-related potentials (ERPs) in a visual continuous word recognition paradigm to gain information about recognition memory effects in these patients. Results: The amplitude of one attention-related ERP component, the N1, was significantly increased for the ADHD adults compared with the healthy controls at the occipital electrodes. The ERPs for the second presentation were significantly more positive than the ERPs for the first presentation. This effect did not significantly differ between groups. Conclusion: Neuronal activity related to an early attentional mechanism appears to be enhanced in ADHD patients. Concerning the early or the late part of the old/new effect, ADHD patients show no difference, which suggests that there are no differences with respect to recollection- and familiarity-based recognition processes.

  20. Meaningful Memory in Acute Anorexia Nervosa Patients-Comparing Recall, Learning, and Recognition of Semantically Related and Semantically Unrelated Word Stimuli.

    Science.gov (United States)

    Terhoeven, Valentin; Kallen, Ursula; Ingenerf, Katrin; Aschenbrenner, Steffen; Weisbrod, Matthias; Herzog, Wolfgang; Brockmeyer, Timo; Friederich, Hans-Christoph; Nikendei, Christoph

    2017-03-01

    It is unclear whether observed memory impairment in anorexia nervosa (AN) depends on the semantic structure (categorized words) of material to be encoded. We aimed to investigate the processing of semantically related information in AN. Memory performance was assessed in a recall, learning, and recognition test in 27 adult women with AN (19 restricting, 8 binge-eating/purging subtype; average disease duration: 9.32 years) and 30 healthy controls using an extended version of the Rey Auditory Verbal Learning Test, applying semantically related and unrelated word stimuli. Short-term memory (immediate recall, learning), regardless of semantics of the words, was significantly worse in AN patients, whereas long-term memory (delayed recall, recognition) did not differ between AN patients and controls. Semantics of the stimuli do not have a better effect on memory recall in AN compared to controls. Impaired short-term versus long-term memory is discussed in relation to dysfunctional working memory in AN. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.

  1. The effects of video self-modeling on the decoding skills of children at risk for reading disabilities

    OpenAIRE

    Ayala, SM; O'Connor, R

    2013-01-01

    Ten first grade students who had responded poorly to a Tier 2 reading intervention in a response to intervention (RTI) model received an intervention of video self-modeling to improve decoding skills and sight word recognition. Students were video recorded blending and segmenting decodable words and reading sight words. Videos were edited and viewed a minimum of four times per week. Data were collected twice per week using curriculum-based measures. A single subject multiple baseline across p...

  2. The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words

    Science.gov (United States)

    Hoedemaker, Renske S.; Gordon, Peter C.

    2016-01-01

    In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394

  3. Extending models of visual-word recognition to semicursive scripts: Evidence from masked priming in Uyghur.

    Science.gov (United States)

    Yakup, Mahire; Abliz, Wayit; Sereno, Joan; Perea, Manuel

    2015-12-01

    One basic feature of the Arabic script is its semicursive style: some letters are connected to the next, but others are not, as in the Uyghur word [see text]/ya xʃi/ ("good"). None of the current orthographic coding schemes in models of visual-word recognition, which were created for the Roman script, assign a differential role to the coding of within letter "chunks" and between letter "chunks" in words in the Arabic script. To examine how letter identity/position is coded at the earliest stages of word processing in the Arabic script, we conducted 2 masked priming lexical decision experiments in Uyghur, an agglutinative Turkic language. The target word was preceded by an identical prime, by a transposed-letter nonword prime (that either kept the ligation pattern or did not), or by a 2-letter replacement nonword prime. Transposed-letter primes were as effective as identity primes when the letter transposition in the prime kept the same ligation pattern as the target word (e.g., [see text]/inta_jin/-/itna_jin/), but not when the transposed-letter prime didn't keep the ligation pattern (e.g., [see text]/so_w_ʁa_t/-/so_ʁw_a_t/). Furthermore, replacement-letter primes were more effective when they kept the ligation pattern of the target word than when they did not (e.g., [see text]/so_d_ʧa_t/-/so_w_ʁa_t/ faster than [see text]/so_ʧd_a_t/-/so_w_ʁa_t/). We examined how input coding schemes could be extended to deal with the intricacies of semicursive scripts. (c) 2015 APA, all rights reserved).

  4. Integrated Sight Boresighting

    National Research Council Canada - National Science Library

    Gilstrap, Jeff

    1998-01-01

    ... (IR) pointer into an advanced weapon sight and surveillance system. The Integrated Sight is being developed as a technology demonstrator and potential future upgrade to the Land Warrior and Thermal Weapon Sight Programs...

  5. Short term memory and working memory in blind versus sighted children.

    Science.gov (United States)

    Withagen, Ans; Kappers, Astrid M L; Vervloed, Mathijs P J; Knoors, Harry; Verhoeven, Ludo

    2013-07-01

    There is evidence that blind people may strengthen their memory skills to compensate for absence of vision. However, which aspects of memory are involved is open to debate and a developmental perspective is generally lacking. In the present study, we compared the short term memory (STM) and working memory (WM) of 10-year-old blind children and sighted children. STM was measured using digit span forward, name learning, and word span tasks; WM was measured using listening span and digit span backward tasks. The blind children outperformed their sighted peers on both STM and WM tasks. The enhanced capacity of the blind children on digit span and other STM tasks confirms the results of earlier research; the significantly better performance of the blind children relative to their sighted peers on verbal WM tasks is a new interesting finding. Task characteristics, including the verbal nature of the WM tasks and strategies used to perform these tasks, are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition

    OpenAIRE

    Bettadapura, Vinay; Schindler, Grant; Plotz, Thomaz; Essa, Irfan

    2015-01-01

    We present data-driven techniques to augment Bag of Words (BoW) models, which allow for more robust modeling and recognition of complex long-term activities, especially when the structure and topology of the activities are not known a priori. Our approach specifically addresses the limitations of standard BoW approaches, which fail to represent the underlying temporal and causal information that is inherent in activity streams. In addition, we also propose the use of randomly sampled regular ...

  7. Contextual diversity facilitates learning new words in the classroom.

    Directory of Open Access Journals (Sweden)

    Eva Rosa

    Full Text Available In the field of word recognition and reading, it is commonly assumed that frequently repeated words create more accessible memory traces than infrequently repeated words, thus capturing the word-frequency effect. Nevertheless, recent research has shown that a seemingly related factor, contextual diversity (defined as the number of different contexts [e.g., films] in which a word appears), is a better predictor than word-frequency in word recognition and sentence reading experiments. Recent research has shown that contextual diversity plays an important role when learning new words in a laboratory setting with adult readers. In the current experiment, we directly manipulated contextual diversity in a very ecological scenario: at school, when Grade 3 children were learning words in the classroom. The new words appeared in different contexts/topics (high-contextual diversity) or only in one of them (low-contextual diversity). Results showed that words encountered in different contexts were learned and remembered more effectively than those presented in redundant contexts. We discuss the practical (educational [e.g., curriculum design]) and theoretical (models of word recognition) implications of these findings.

  8. The Effects of Linguistic Context on Word Recognition in Noise by Elderly Listeners Using Spanish Sentence Lists (SSL)

    Science.gov (United States)

    Cervera, Teresa; Rosell, Vicente

    2015-01-01

    This study evaluated the effects of the linguistic context on the recognition of words in noise in older listeners using the Spanish Sentence Lists. These sentences were developed based on the approach of the SPIN test for the English language, which contains high and low predictability (HP and LP) sentences. In addition, the relative contribution…

  9. Reassessing word frequency as a determinant of word recognition for skilled and unskilled readers.

    Science.gov (United States)

    Kuperman, Victor; Van Dyke, Julie A

    2013-06-01

    The importance of vocabulary in reading comprehension emphasizes the need to accurately assess an individual's familiarity with words. The present article highlights problems with using occurrence counts in corpora as an index of word familiarity, especially when studying individuals varying in reading experience. We demonstrate via computational simulations and norming studies that corpus-based word frequencies systematically overestimate strengths of word representations, especially in the low-frequency range and in smaller-size vocabularies. Experience-driven differences in word familiarity prove to be faithfully captured by the subjective frequency ratings collected from responders at different experience levels. When matched on those levels, this lexical measure explains more variance than corpus-based frequencies in eye-movement and lexical decision latencies to English words, attested in populations with varied reading experience and skill. Furthermore, the use of subjective frequencies removes the widely reported (corpus) Frequency × Skill interaction, showing that more skilled readers are equally faster in processing any word than the less skilled readers, not disproportionally faster in processing lower frequency words. This finding challenges the view that the more skilled an individual is in generic mechanisms of word processing, the less reliant he or she will be on the actual lexical characteristics of that word. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  10. Reassessing word frequency as a determinant of word recognition for skilled and unskilled readers

    Science.gov (United States)

    Kuperman, Victor; Van Dyke, Julie A.

    2013-01-01

    The importance of vocabulary in reading comprehension emphasizes the need to accurately assess an individual’s familiarity with words. The present article highlights problems with using occurrence counts in corpora as an index of word familiarity, especially when studying individuals varying in reading experience. We demonstrate via computational simulations and norming studies that corpus-based word frequencies systematically overestimate strengths of word representations, especially in the low-frequency range and in smaller-size vocabularies. Experience-driven differences in word familiarity prove to be faithfully captured by the subjective frequency ratings collected from responders at different experience levels. When matched on those levels, this lexical measure explains more variance than corpus-based frequencies in eye-movement and lexical decision latencies to English words, attested in populations with varied reading experience and skill. Furthermore, the use of subjective frequencies removes the widely reported (corpus) frequency-by-skill interaction, showing that more skilled readers are equally faster in processing any word than the less skilled readers, not disproportionally faster in processing lower-frequency words. This finding challenges the view that the more skilled an individual is in generic mechanisms of word processing the less reliant he/she will be on the actual lexical characteristics of that word. PMID:23339352
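
    The two records above argue that subjective frequency ratings outpredict corpus counts. A minimal sketch of that style of model comparison with ordinary least squares is shown below; the CSV file name and column names (rt, log_corpus_freq, subj_freq) are assumptions for illustration, not the authors' data or analysis.

```python
# Illustrative comparison of corpus-based vs. subjective word frequency as predictors
# of lexical decision latencies. The file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lexical_decision.csv")  # assumed columns: rt, log_corpus_freq, subj_freq

m_corpus = smf.ols("rt ~ log_corpus_freq", data=df).fit()
m_subjective = smf.ols("rt ~ subj_freq", data=df).fit()

print("corpus log-frequency  R^2 = %.3f" % m_corpus.rsquared)
print("subjective frequency  R^2 = %.3f" % m_subjective.rsquared)
```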

  11. Semantic Neighborhood Effects for Abstract versus Concrete Words.

    Science.gov (United States)

    Danguecan, Ashley N; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words.

  12. Emotion and memory: a recognition advantage for positive and negative words independent of arousal.

    Science.gov (United States)

    Adelman, James S; Estes, Zachary

    2013-12-01

    Much evidence indicates that emotion enhances memory, but the precise effects of the two primary factors of arousal and valence remain at issue. Moreover, the current knowledge of emotional memory enhancement is based mostly on small samples of extremely emotive stimuli presented in unnaturally high proportions without adequate affective, lexical, and semantic controls. To investigate how emotion affects memory under conditions of natural variation, we tested whether arousal and valence predicted recognition memory for over 2500 words that were not sampled for their emotionality, and we controlled a large variety of lexical and semantic factors. Both negative and positive stimuli were remembered better than neutral stimuli, whether arousing or calming. Arousal failed to predict recognition memory, either independently or interactively with valence. Results support models that posit a facilitative role of valence in memory. This study also highlights the importance of stimulus controls and experimental designs in research on emotional memory. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Recurrent neural networks with specialized word embeddings for health-domain named-entity recognition.

    Science.gov (United States)

    Jauregi Unanue, Iñigo; Zare Borzeshi, Ehsan; Piccardi, Massimo

    2017-12-01

    Previous state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text "feature engineering" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word "embeddings". (i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets. Two deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models. We have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset. We present a state-of-the-art system for DNR and CCE. Automated word embeddings have allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary. Copyright © 2017 Elsevier Inc. All rights reserved.
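
    The specialized-embedding step described above (training word vectors on in-domain text such as MIMIC-III) can be sketched with gensim. The tokenized sentences below are invented stand-ins for a large clinical corpus; access to MIMIC-III itself requires credentialing, so no real data are shown.

```python
# Minimal sketch of training domain-specific word embeddings (the "specialized
# embeddings" idea above) with gensim Word2Vec. The two example sentences are
# invented stand-ins for a large clinical corpus.
from gensim.models import Word2Vec

clinical_sentences = [
    ["patient", "received", "metformin", "500", "mg", "po", "bid"],
    ["no", "adverse", "reaction", "to", "lisinopril", "was", "noted"],
]  # in practice: millions of tokenized sentences

model = Word2Vec(
    sentences=clinical_sentences,
    vector_size=200,  # embedding dimensionality
    window=5,
    min_count=1,      # raise this threshold on a real corpus
    sg=1,             # skip-gram
    workers=4,
)
model.wv.save_word2vec_format("clinical_embeddings.txt")
print(model.wv["metformin"][:5])
```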

  14. Combined ERP/fMRI evidence for early word recognition effects in the posterior inferior temporal gyrus.

    Science.gov (United States)

    Dien, Joseph; Brian, Eric S; Molfese, Dennis L; Gold, Brian T

    2013-10-01

    Two brain regions with established roles in reading are the posterior middle temporal gyrus and the posterior fusiform gyrus (FG). Lesion studies have also suggested that the region located between them, the posterior inferior temporal gyrus (pITG), plays a central role in word recognition. However, these lesion results could reflect disconnection effects since neuroimaging studies have not reported consistent lexicality effects in pITG. Here we tested whether these reported pITG lesion effects are due to disconnection effects or not using parallel Event-related Potentials (ERP)/functional magnetic resonance imaging (fMRI) studies. We predicted that the Recognition Potential (RP), a left-lateralized ERP negativity that peaks at about 200-250 msec, might be the electrophysiological correlate of pITG activity and that conditions that evoke the RP (perceptual degradation) might therefore also evoke pITG activity. In Experiment 1, twenty-three participants performed a lexical decision task (temporally flanked by supraliminal masks) while having high-density 129-channel ERP data collected. In Experiment 2, a separate group of fifteen participants underwent the same task while having fMRI data collected in a 3T scanner. Examination of the ERP data suggested that a canonical RP effect was produced. The strongest corresponding effect in the fMRI data was in the vicinity of the pITG. In addition, results indicated stimulus-dependent functional connectivity between pITG and a region of the posterior FG near the Visual Word Form Area (VWFA) during word compared to nonword processing. These results provide convergent spatiotemporal evidence that the pITG contributes to early lexical access through interaction with the VWFA. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Effects of Word Recognition Training in a Picture-Word Interference Task: Automaticity vs. Speed.

    Science.gov (United States)

    Ehri, Linnea C.

    First and second graders were taught to recognize a set of written words either more accurately or more rapidly. Both before and after word training, they named pictures printed with and without these words as distractors. Of interest was whether training would enhance or diminish the interference created by these words in the picture naming task.…

  16. Intact suppression of increased false recognition in schizophrenia.

    Science.gov (United States)

    Weiss, Anthony P; Dodson, Chad S; Goff, Donald C; Schacter, Daniel L; Heckers, Stephan

    2002-09-01

    Recognition memory is impaired in patients with schizophrenia, as they rely largely on item familiarity, rather than conscious recollection, to make mnemonic decisions. False recognition of novel items (foils) is increased in schizophrenia and may relate to this deficit in conscious recollection. By studying pictures of the target word during encoding, healthy adults can suppress false recognition. This study examined the effect of pictorial encoding on subsequent recognition of repeated foils in patients with schizophrenia. The study included 40 patients with schizophrenia and 32 healthy comparison subjects. After incidental encoding of 60 words or pictures, subjects were tested for recognition of target items intermixed with 60 new foils. These new foils were subsequently repeated following either a two- or 24-word delay. Subjects were instructed to label these repeated foils as new and not to mistake them for old target words. Schizophrenic patients showed greater overall false recognition of repeated foils. The rate of false recognition of repeated foils was lower after picture encoding than after word encoding. Despite higher levels of false recognition of repeated new items, patients and comparison subjects demonstrated a similar degree of false recognition suppression after picture, as compared to word, encoding. Patients with schizophrenia displayed greater false recognition of repeated foils than comparison subjects, suggesting both a decrement of item- (or source-) specific recollection and a consequent reliance on familiarity in schizophrenia. Despite these deficits, presenting pictorial information at encoding allowed schizophrenic subjects to suppress false recognition to a similar degree as the comparison group, implying the intact use of a high-level cognitive strategy in this population.

  17. Beginners Remember Orthography when They Learn to Read Words: The Case of Doubled Letters

    Science.gov (United States)

    Wright, Donna-Marie; Ehri, Linnea C.

    2007-01-01

    Sight word learning and memory were studied to clarify how early during development readers process visual letter patterns that are not dictated by phonology, and whether their word learning is influenced by the legality of letter patterns. Forty kindergartners and first graders were taught to read 12 words containing either single consonants…

  18. ERP profiles for face and word recognition are based on their status in semantic memory not their stimulus category.

    Science.gov (United States)

    Nie, Aiqing; Griffin, Michael; Keinath, Alexander; Walsh, Matthew; Dittmann, Andrea; Reder, Lynne

    2014-04-04

    Previous research has suggested that faces and words are processed and remembered differently as reflected by different ERP patterns for the two types of stimuli. Specifically, face stimuli produced greater late positive deflections for old items in anterior compared to posterior regions, while word stimuli produced greater late positive deflections in posterior compared to anterior regions. Given that words have existing representations in subjects' long-term memories (LTM) and that face stimuli used in prior experiments were of unknown individuals, we conducted an ERP study that crossed face and letter stimuli with the presence or absence of a prior (stable or existing) memory representation. During encoding, subjects judged whether stimuli were known (famous face or real word) or not known (unknown person or pseudo-word). A surprise recognition memory test required subjects to distinguish between stimuli that appeared during the encoding phase and stimuli that did not. ERP results were consistent with previous research when comparing unknown faces and words; however, the late ERP pattern for famous faces was more similar to that for words than for unknown faces. This suggests that the critical ERP difference is mediated by whether there is a prior representation in LTM, and not whether the stimulus involves letters or faces. Published by Elsevier B.V.

  19. Different Neural Correlates of Emotion-Label Words and Emotion-Laden Words: An ERP Study

    OpenAIRE

    Zhang, Juan; Wu, Chenggang; Meng, Yaxuan; Yuan, Zhen

    2017-01-01

    It is well-documented that both emotion-label words (e.g., sadness, happiness) and emotion-laden words (e.g., death, wedding) can induce emotion activation. However, the neural correlates of emotion-label word and emotion-laden word recognition have not been examined. The present study aimed to compare the underlying neural responses when processing the two kinds of words by employing event-related potential (ERP) measurements. Fifteen Chinese native speakers were asked to perform a lexical...

  20. Emotion words and categories: evidence from lexical decision.

    Science.gov (United States)

    Scott, Graham G; O'Donnell, Patrick J; Sereno, Sara C

    2014-05-01

    We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion-frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency negative words demonstrated a similar advantage. In Experiments 2a and 2b, explicit categories ("positive," "negative," and "household" items) were specified to participants. Positive words again elicited faster responses than did neutral words. Responses to negative words, however, were no different than those to neutral words, regardless of their frequency. The overall pattern of effects indicates that positive words are always facilitated, frequency plays a greater role in the recognition of negative words, and a "negative" category represents a somewhat disparate set of emotions. These results support the notion that emotion word processing may be moderated by distinct systems.

  1. Recognition without Identification for Words, Pseudowords and Nonwords

    Science.gov (United States)

    Arndt, Jason; Lee, Karen; Flora, David B.

    2008-01-01

    Three experiments examined whether the representations underlying recognition memory familiarity can be episodic in nature. Recognition without identification [Cleary, A. M., & Greene, R. L. (2000). Recognition without identification. "Journal of Experimental Psychology: Learning, Memory, and Cognition," 26, 1063-1069; Peynircioglu, Z. F. (1990).…

  2. Enhanced Recognition and Recall of New Words in 7- and 12-Year-Olds Following a Period of Offline Consolidation

    Science.gov (United States)

    Brown, Helen; Weighall, Anna; Henderson, Lisa M.; Gaskell, M. Gareth

    2012-01-01

    Recent studies of adults have found evidence for consolidation effects in the acquisition of novel words, but little is known about whether such effects are found developmentally. In two experiments, we familiarized children with novel nonwords (e.g., "biscal") and tested their recognition and recall of these items. In Experiment 1, 7-year-olds…

  3. Caffeine improves left hemisphere processing of positive words.

    Science.gov (United States)

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.

  4. Caffeine improves left hemisphere processing of positive words.

    Directory of Open Access Journals (Sweden)

    Lars Kuchinke

    Full Text Available A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.

  5. The Impact of Word-Recognition Practice on the Development of the Listening Comprehension of Intermediate-Level EFL Learners

    Directory of Open Access Journals (Sweden)

    Hossein Navidinia

    2016-05-01

    Full Text Available The present study aims at examining the effect of word-recognition practice on EFL students' listening comprehension. The participants consisted of 30 intermediate EFL learners studying in a language institute in Birjand City, Iran. They were assigned randomly to two equal groups, control and experimental. Before starting the experiment, the listening section of IELTS was given to all of the students as the pretest. Then, during the experiment, the experimental group was asked to transcribe the listening sections of their course book, while the students in the control group did not transcribe. After 25 sessions (2 hours each) of instruction, another listening test (IELTS proficiency test) was given to both groups as the post-test. The results of the two tests were then analyzed and compared using a one-way ANCOVA. The results indicated that the experimental group outperformed the control group (p < 0.05). Therefore, it was concluded that word-recognition practice is an effective way to improve EFL learners' listening comprehension. The overall results of the study are discussed, and implications for further research and for practitioners are drawn.

  6. Man machine interface based on speech recognition

    International Nuclear Information System (INIS)

    Jorge, Carlos A.F.; Aghina, Mauricio A.C.; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2007-01-01

    This work reports the development of a Man Machine Interface based on speech recognition. The system must recognize spoken commands and execute the desired tasks without manual intervention by operators. The range of applications goes from the execution of commands in an industrial plant's control room to navigation and interaction in virtual environments. Results are reported for isolated word recognition, the isolated words corresponding to the spoken commands. In the pre-processing stage, relevant parameters are extracted from the speech signals using the cepstral analysis technique; these parameters are used for isolated word recognition and serve as the inputs of an artificial neural network that performs the recognition task. (author)
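
    The pipeline in this record (cepstral features computed from spoken commands, fed to a neural network classifier) can be sketched as follows. MFCCs are used here as the cepstral representation and scikit-learn's multilayer perceptron as the network; the file names, command labels, and per-utterance summary statistics are assumptions for illustration, not the authors' system.

```python
# Sketch of an isolated-word command recognizer: cepstral (MFCC) features summarized
# per utterance, then classified by a small neural network. File names and labels
# are placeholders; this is not the authors' implementation.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])  # fixed-length vector

training = [("open_valve_01.wav", "open"), ("close_valve_01.wav", "close")]  # placeholders
X = np.array([mfcc_features(path) for path, _ in training])
y = np.array([label for _, label in training])

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([mfcc_features("open_valve_02.wav")]))
```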

  7. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    Science.gov (United States)

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  8. Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory

    Directory of Open Access Journals (Sweden)

    Paul Miller

    2010-06-01

    Full Text Available Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning by comparing multiple forms of spike-timing dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.
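
    The Hebbian learning rule in this model is spike-timing-dependent plasticity. A tiny pair-based STDP sketch is given below; the amplitudes and time constants are illustrative assumptions, not the parameters of the model in the record.

```python
# Pair-based STDP sketch: the weight change decays exponentially with the
# pre/post spike-time difference (pre-before-post potentiates, post-before-pre
# depresses). Parameters are illustrative, not those of the model described above.
import numpy as np

A_PLUS, A_MINUS = 0.010, 0.012     # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fires before pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 72.0)]:  # spike pairs (ms)
    w += stdp_dw(t_pre, t_post)
print("final weight:", round(w, 4))
```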

  9. The Pattern Recognition in Cattle Brand using Bag of Visual Words and Support Vector Machines Multi-Class

    Directory of Open Access Journals (Sweden)

    Carlos Silva, Mr

    2018-03-01

    Full Text Available The automatic recognition of cattle-brand images is a necessity for the governmental organs responsible for this activity. To help this process, this work presents a method that uses Bag of Visual Words for extracting characteristics from images of cattle brands and multi-class Support Vector Machines for classification. The method consists of six stages: (a) select a database of images; (b) extract points of interest (SURF); (c) create a vocabulary (K-means); (d) create vectors of image characteristics (visual words); (e) train and classify images (SVM); (f) evaluate the classification results. The accuracy of the method was tested on a municipal city hall database, where it achieved satisfactory results: 86.02% accuracy and 56.705 seconds of processing time.
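
    The six-stage pipeline above maps naturally onto OpenCV plus scikit-learn. In the sketch below SIFT stands in for SURF (SURF is patented and absent from standard OpenCV builds), and the image paths, labels, and vocabulary size are assumptions rather than the authors' setup.

```python
# Bag-of-Visual-Words sketch: local descriptors -> K-means vocabulary ->
# per-image visual-word histograms -> multi-class SVM. SIFT replaces SURF here,
# and image paths/labels are placeholders, not the authors' data.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

detector = cv2.SIFT_create()
K = 100  # vocabulary size (assumed)

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = detector.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

train = [("brand_001.png", 0), ("brand_002.png", 1)]  # placeholder (image, class) pairs

all_desc = np.vstack([descriptors(path) for path, _ in train])
vocab = KMeans(n_clusters=K, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(path):
    words = vocab.predict(descriptors(path))
    hist, _ = np.histogram(words, bins=np.arange(K + 1))
    return hist / max(hist.sum(), 1)  # L1-normalized histogram

X = np.array([bovw_histogram(path) for path, _ in train])
y = np.array([label for _, label in train])
clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, y)
print(clf.predict([bovw_histogram("brand_query.png")]))
```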

  10. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    Science.gov (United States)

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding as a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution after it had learned to process words in the context of different linguistic environments. The results show that in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521

  11. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight.

    Science.gov (United States)

    Cutter, Michael; Manduchi, Roberto

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate whether verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participants' feedback and performance before and after assistance from our software.
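
    As a minimal illustration of the OCR step that follows successful framing, a Tesseract call via pytesseract is shown below; the verbal-guidance logic studied in the record is not reproduced, and the image file name is a placeholder.

```python
# Minimal OCR step with Tesseract (via pytesseract). The verbal-guidance framing
# aid investigated in the record is not reproduced; the image path is a placeholder.
from PIL import Image
import pytesseract

img = Image.open("document_photo.jpg")
text = pytesseract.image_to_string(img)
print(text if text.strip() else "No readable text found; the document may need reframing.")
```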

  12. Changes in recognition memory over time: an ERP investigation into vocabulary learning.

    Directory of Open Access Journals (Sweden)

    Shekeila D Palmer

    Full Text Available Although it seems intuitive to assume that recognition memory fades over time when information is not reinforced, some aspects of word learning may benefit from a period of consolidation. In the present study, event-related potentials (ERPs) were used to examine changes in recognition memory responses to familiar and newly learned (novel) words over time. Native English speakers were taught novel words associated with English translations, and subsequently performed a Recognition Memory task in which they made old/new decisions in response to both words (trained word vs. untrained word) and novel words (trained novel word vs. untrained novel word). The Recognition task was performed 45 minutes after training (Day 1) and then repeated the following day (Day 2), with no additional training session in between. For familiar words, the late parietal old/new effect distinguished old from new items on both Day 1 and Day 2, although response to trained items was significantly weaker on Day 2. For novel words, the LPC again distinguished old from new items on both days, but the effect became significantly larger on Day 2. These data suggest that while recognition memory for familiar items may fade over time, recognition of novel items, conscious recollection in particular, may benefit from a period of consolidation.

  13. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  14. The locus of word frequency effects in skilled spelling-to-dictation.

    Science.gov (United States)

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  15. Event Recognition Based on Deep Learning in Chinese Texts.

    Directory of Open Access Journals (Sweden)

    Yajun Zhang

    Full Text Available Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.

  16. Event Recognition Based on Deep Learning in Chinese Texts.

    Science.gov (United States)

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.
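
    The two records above train a deep belief network over per-word feature vectors and then identify trigger words with a supervised network. A rough scikit-learn analogue, restricted Boltzmann machine feature learning followed by a classifier, is sketched below with synthetic data; it is a stand-in for illustration, not the CEERM system.

```python
# Rough analogue of the DBN-then-classifier idea: unsupervised RBM feature learning
# followed by a supervised classifier over the learned features. Synthetic data;
# this is an illustration, not the CEERM system described above.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.rand(200, 60)            # 200 words x 60 features scaled to [0, 1] (synthetic)
y = rng.randint(0, 2, size=200)  # 1 = trigger word, 0 = other (synthetic labels)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy: %.2f" % model.score(X, y))
```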

  17. Textual emotion recognition for enhancing enterprise computing

    Science.gov (United States)

    Quan, Changqin; Ren, Fuji

    2016-05-01

    The growing interest in affective computing (AC) brings many valuable research topics that can meet different application demands in enterprise systems. The present study explores a subarea of AC techniques - textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text is able to provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of 'emotion state in text' is proposed to encompass the multidimensional emotions in text. It provides a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words that carry indirect emotions, emotional ambiguity, or multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that the classification results under the sequence model are better than those under the bag-of-words model, and the homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system provides a convenient way to acquire valuable emotion information and to improve enterprise competitive ability in many aspects.
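
    A bag-of-words baseline for the multi-label setting described above can be sketched with a one-vs-rest classifier over TF-IDF features. The sentences and emotion labels below are invented, and the sequence (Markov) model from the record is not reproduced.

```python
# Bag-of-words baseline for multi-label sentence emotion recognition:
# TF-IDF features with a one-vs-rest logistic regression. The sentences and
# labels are invented; the sequence (Markov) model is not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

sentences = [
    "I can't believe we won the final!",
    "The funeral was quiet and everyone was in tears.",
    "The exam results terrify me, but I am hopeful.",
]
labels = [["joy", "surprise"], ["sadness"], ["fear", "joy"]]  # invented multi-labels

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(sentences, Y)

pred = clf.predict(["We lost the match and I am devastated."])
print(mlb.inverse_transform(pred))
```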

  18. The Army word recognition system

    Science.gov (United States)

    Hadden, David R.; Haratz, David

    1977-01-01

    The application of speech recognition technology in the Army command and control area is presented. The problems associated with this program are described, as well as its relevance in terms of man/machine interactions, voice inflexions, and the amount of training needed to interact with and utilize the automated system.

  19. The Effect of Contingent Reinforcement on the Acquisition of Sight Vocabulary. Technical Report No. 49.

    Science.gov (United States)

    Brandt, Mary E.; And Others

    The present study is a replication of a Lahey and Drabman study (1974) which investigated the effects of contingent versus noncontingent reinforcement on the learning of sight words. The subjects in this study were 14 Kamehameha Early Education Program (KEEP) students who composed the lowest reading group in a combined first-second grade…

  20. The Role of Geminates in Infants' Early Word Production and Word-Form Recognition

    Science.gov (United States)

    Vihman, Marilyn; Majoran, Marinella

    2017-01-01

    Infants learning languages with long consonants, or geminates, have been found to "overselect" and "overproduce" these consonants in early words and also to commonly omit the word-initial consonant. A production study with thirty Italian children recorded at 1;3 and 1;9 strongly confirmed both of these tendencies. To test the…

  1. The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands

    OpenAIRE

    Diana, Rachel A.; Reder, Lynne M.

    2006-01-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in ad...

  2. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    Science.gov (United States)

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  3. Selective attention and recognition: effects of congruency on episodic learning.

    Science.gov (United States)

    Rosner, Tamara M; D'Angelo, Maria C; MacLellan, Ellen; Milliken, Bruce

    2015-05-01

    Recent research on cognitive control has focused on the learning consequences of high selective attention demands in selective attention tasks (e.g., Botvinick, Cognit Affect Behav Neurosci 7(4):356-366, 2007; Verguts and Notebaert, Psychol Rev 115(2):518-525, 2008). The current study extends these ideas by examining the influence of selective attention demands on remembering. In Experiment 1, participants read aloud the red word in a pair of red and green spatially interleaved words. Half of the items were congruent (the interleaved words had the same identity), and the other half were incongruent (the interleaved words had different identities). Following the naming phase, participants completed a surprise recognition memory test. In this test phase, recognition memory was better for incongruent than for congruent items. In Experiment 2, context was only partially reinstated at test, and again recognition memory was better for incongruent than for congruent items. In Experiment 3, all of the items contained two different words, but in one condition the words were presented close together and interleaved, while in the other condition the two words were spatially separated. Recognition memory was better for the interleaved than for the separated items. This result rules out an interpretation of the congruency effects on recognition in Experiments 1 and 2 that hinges on stronger relational encoding for items that have two different words. Together, the results support the view that selective attention demands for incongruent items lead to encoding that improves recognition.

  4. Transfer of L1 Visual Word Recognition Strategies during Early Stages of L2 Learning: Evidence from Hebrew Learners Whose First Language Is Either Semitic or Indo-European

    Science.gov (United States)

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2016-01-01

    The present study examined visual word recognition processes in Hebrew (a Semitic language) among beginning learners whose first language (L1) was either Semitic (Arabic) or Indo-European (e.g. English). To examine if learners, like native Hebrew speakers, exhibit morphological sensitivity to root and word-pattern morphemes, learners made an…

  5. No Sensory Compensation for Olfactory Memory: Differences between Blind and Sighted People

    Directory of Open Access Journals (Sweden)

    Agnieszka Sorokowska

    2017-12-01

    Full Text Available Blindness can be a driving force behind a variety of changes in sensory systems. When vision is missing, other modalities and higher cognitive functions can become hyper-developed through a mechanism called sensory compensation. Overall, previous studies suggest that olfactory memory in blind people can be better than that of the sighted individuals. Better performance of blind individuals in other-sensory modalities was hypothesized to be a result of, among others, intense perceptual training. At the same time, if the superiority of blind people in olfactory abilities indeed results from training, their scores should not decrease with age to such an extent as among the sighted people. Here, this hypothesis was tested in a large sample of 94 blind individuals. Olfactory memory was assessed using the Test for Olfactory Memory, comprising episodic odor recognition (discriminating previously presented odors from new odors) and two forms of semantic memory (cued and free identification of odors). Regarding episodic olfactory memory, we observed an age-related decline in correct hits in blind participants, but an age-related increase in false alarms in sighted participants. Further, age moderated the between-group differences for correct hits, but the direction of the observed effect was contrary to our expectations. The difference between blind and sighted individuals younger than 40 years old was non-significant, but older sighted individuals outperformed their blind counterparts. In conclusion, we found no positive effect of visual impairment on olfactory memory. We suggest that daily perceptual training is not enough to increase olfactory memory function in blind people.

  6. No Sensory Compensation for Olfactory Memory: Differences between Blind and Sighted People.

    Science.gov (United States)

    Sorokowska, Agnieszka; Karwowski, Maciej

    2017-01-01

    Blindness can be a driving force behind a variety of changes in sensory systems. When vision is missing, other modalities and higher cognitive functions can become hyper-developed through a mechanism called sensory compensation. Overall, previous studies suggest that olfactory memory in blind people can be better than that of the sighted individuals. Better performance of blind individuals in other-sensory modalities was hypothesized to be a result of, among others, intense perceptual training. At the same time, if the superiority of blind people in olfactory abilities indeed results from training, their scores should not decrease with age to such an extent as among the sighted people. Here, this hypothesis was tested in a large sample of 94 blind individuals. Olfactory memory was assessed using the Test for Olfactory Memory, comprising episodic odor recognition (discriminating previously presented odors from new odors) and two forms of semantic memory (cued and free identification of odors). Regarding episodic olfactory memory, we observed an age-related decline in correct hits in blind participants, but an age-related increase in false alarms in sighted participants. Further, age moderated the between-group differences for correct hits, but the direction of the observed effect was contrary to our expectations. The difference between blind and sighted individuals younger than 40 years old was non-significant, but older sighted individuals outperformed their blind counterparts. In conclusion, we found no positive effect of visual impairment on olfactory memory. We suggest that daily perceptual training is not enough to increase olfactory memory function in blind people.

  7. No Sensory Compensation for Olfactory Memory: Differences between Blind and Sighted People

    Science.gov (United States)

    Sorokowska, Agnieszka; Karwowski, Maciej

    2017-01-01

    Blindness can be a driving force behind a variety of changes in sensory systems. When vision is missing, other modalities and higher cognitive functions can become hyper-developed through a mechanism called sensory compensation. Overall, previous studies suggest that olfactory memory in blind people can be better than that of the sighted individuals. Better performance of blind individuals in other-sensory modalities was hypothesized to be a result of, among others, intense perceptual training. At the same time, if the superiority of blind people in olfactory abilities indeed results from training, their scores should not decrease with age to such an extent as among the sighted people. Here, this hypothesis was tested in a large sample of 94 blind individuals. Olfactory memory was assessed using the Test for Olfactory Memory, comprising episodic odor recognition (discriminating previously presented odors from new odors) and two forms of semantic memory (cued and free identification of odors). Regarding episodic olfactory memory, we observed an age-related decline in correct hits in blind participants, but an age-related increase in false alarms in sighted participants. Further, age moderated the between-group differences for correct hits, but the direction of the observed effect was contrary to our expectations. The difference between blind and sighted individuals younger than 40 years old was non-significant, but older sighted individuals outperformed their blind counterparts. In conclusion, we found no positive effect of visual impairment on olfactory memory. We suggest that daily perceptual training is not enough to increase olfactory memory function in blind people. PMID:29276494

  8. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Science.gov (United States)

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
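
    The following small numpy simulation sketches the core of the Bayesian account described above, under assumed Gaussian noise: each word is a point in a D-dimensional feature space, auditory (and optionally visual) observations are noisy samples of that point, and recognition picks the word with the highest posterior. Vocabulary size, dimensionality, and noise levels are arbitrary choices, not the paper's.

      import numpy as np

      rng = np.random.default_rng(1)

      def accuracy(dim, aud_noise, vis_noise=None, n_words=50, trials=2000):
          words = rng.normal(size=(n_words, dim))          # the "lexicon"
          correct = 0
          for _ in range(trials):
              target = rng.integers(n_words)
              aud = words[target] + rng.normal(scale=aud_noise, size=dim)
              # log-likelihood of each word given the auditory observation
              ll = -np.sum((words - aud) ** 2, axis=1) / (2 * aud_noise ** 2)
              if vis_noise is not None:                    # optimally add the visual cue
                  vis = words[target] + rng.normal(scale=vis_noise, size=dim)
                  ll += -np.sum((words - vis) ** 2, axis=1) / (2 * vis_noise ** 2)
              correct += (np.argmax(ll) == target)
          return correct / trials

      for noise in (0.5, 1.5, 3.0):                        # low, moderate, high auditory noise
          gain = accuracy(20, noise, vis_noise=2.0) - accuracy(20, noise)
          print(f"auditory noise {noise}: audiovisual gain = {gain:.3f}")

    Sweeping the auditory noise (and the dimensionality) in this way is a cheap way to see where the audiovisual gain peaks under the model's assumptions.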

  9. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Directory of Open Access Journals (Sweden)

    Wei Ji Ma

    Full Text Available Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.

  10. Feature activation during word recognition: action, visual, and associative-semantic priming effects

    Directory of Open Access Journals (Sweden)

    Kevin J.Y. Lam

    2015-05-01

    Full Text Available Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100 ms, 250 ms, 400 ms, and 1,000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100 ms, 250 ms, and 1,000 ms whereas a visual priming effect was seen only in the ISI of 1,000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.

  11. Information properties of morphologically complex words modulate brain activity during word reading.

    Science.gov (United States)

    Hakala, Tero; Hultén, Annika; Lehtonen, Minna; Lagus, Krista; Salmelin, Riitta

    2018-06-01

    Neuroimaging studies of the reading process point to functionally distinct stages in word recognition. Yet, current understanding of the operations linked to those various stages is mainly descriptive in nature. Approaches developed in the field of computational linguistics may offer a more quantitative approach for understanding brain dynamics. Our aim was to evaluate whether a statistical model of morphology, with well-defined computational principles, can capture the neural dynamics of reading, using the concept of surprisal from information theory as the common measure. The Morfessor model, created for unsupervised discovery of morphemes, is based on the minimum description length principle and attempts to find optimal units of representation for complex words. In a word recognition task, we correlated brain responses to word surprisal values derived from Morfessor and from other psycholinguistic variables that have been linked with various levels of linguistic abstraction. The magnetoencephalography data analysis focused on spatially, temporally and functionally distinct components of cortical activation observed in reading tasks. The early occipital and occipito-temporal responses were correlated with parameters relating to visual complexity and orthographic properties, whereas the later bilateral superior temporal activation was correlated with whole-word based and morphological models. The results show that the word processing costs estimated by the statistical Morfessor model are relevant for brain dynamics of reading during late processing stages. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
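
    For readers unfamiliar with the information-theoretic measure used above, surprisal is simply the negative log-probability a model assigns to a unit (here, a morph or word); the toy probabilities below are invented and only illustrate the computation.

      import math

      def surprisal(prob):
          """Surprisal in bits: lower-probability units carry more information and are more 'surprising'."""
          return -math.log2(prob)

      print(round(surprisal(0.20), 2))    # a frequent, expected morph -> low surprisal
      print(round(surprisal(0.001), 2))   # a rare morph -> high surprisal, higher predicted processing cost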

  12. Emotion words and categories: evidence from lexical decision

    OpenAIRE

    Scott, Graham; O'Donnell, Patrick; Sereno, Sara C.

    2014-01-01

    We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion–frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency nega...

  13. Emotionally enhanced memory for negatively arousing words: storage or retrieval advantage?

    Science.gov (United States)

    Nadarevic, Lena

    2017-12-01

    People typically remember emotionally negative words better than neutral words. Two experiments are reported that investigate whether emotionally enhanced memory (EEM) for negatively arousing words is based on a storage or retrieval advantage. Participants studied non-word-word pairs that either involved negatively arousing or neutral target words. Memory for these target words was tested by means of a recognition test and a cued-recall test. Data were analysed with a multinomial model that allows the disentanglement of storage and retrieval processes in the present recognition-then-cued-recall paradigm. In both experiments the multinomial analyses revealed no storage differences between negatively arousing and neutral words but a clear retrieval advantage for negatively arousing words in the cued-recall test. These findings suggest that EEM for negatively arousing words is driven by associative processes.

  14. Symbol recognition produced by points of tactile stimulation: the illusion of linear continuity.

    Science.gov (United States)

    Gonzales, G R

    1996-11-01

    To determine whether tactile receptive communication is possible through the use of a mechanical device that produces the phi phenomenon on the body surface. Twenty-six subjects (11 blind and 15 sighted participants) were tested with use of a tactile communication device (TCD) that produces an illusion of linear continuity forming numbers on the dorsal aspect of the wrist. Recognition of a number or number set was the goal. A TCD with protruding and vibrating solenoids produced sequentially delivered points of cutaneous stimulation along a pattern resembling numbers and created the illusion of dragging a vibrating stylet to form numbers, similar to what might be felt by testing for graphesthesia. Blind subjects recognized numbers with fewer trials than did sighted subjects, although all subjects were able to recognize all the numbers produced by the TCD. Subjects who had been blind since birth and had no prior tactile exposure to numbers were able to draw the numbers after experiencing them delivered by the TCD even though they did not recognize their meaning. The phi phenomenon is probably responsible for the illusion of continuous lines in the shape of numbers as produced by the TCD. This tactile illusion could potentially be used for more complex tactile communications such as letters and words.

  15. Morphological Family Size Effects in Young First and Second Language Learners: Evidence of Cross-Language Semantic Activation in Visual Word Recognition

    Science.gov (United States)

    de Zeeuw, Marlies; Verhoeven, Ludo; Schreuder, Robert

    2012-01-01

    This study examined to what extent young second language (L2) learners showed morphological family size effects in L2 word recognition and whether the effects were grade-level related. Turkish-Dutch bilingual children (L2) and Dutch (first language, L1) children from second, fourth, and sixth grade performed a Dutch lexical decision task on words…

  16. The impact of inverted text on visual word processing: An fMRI study.

    Science.gov (United States)

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found to not behave similarly to the fusiform face area in that unusual text orientations resulted in increased activation and not decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Brain Network Involved in the Recognition of Facial Expressions of Emotion in the Early Blind

    Directory of Open Access Journals (Sweden)

    Ryo Kitada

    2011-10-01

    Full Text Available Previous studies suggest that the brain network responsible for the recognition of facial expressions of emotion (FEEs) begins to emerge early in life. However, it has been unclear whether visual experience of faces is necessary for the development of this network. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to test the hypothesis that the brain network underlying the recognition of FEEs is not dependent on visual experience of faces. Early-blind, late-blind and sighted subjects participated in the psychophysical experiment. Regardless of group, subjects haptically identified basic FEEs at above-chance levels, without any feedback training. In the subsequent fMRI experiment, the early-blind and sighted subjects haptically identified facemasks portraying three different FEEs and casts of three different shoe types. The sighted subjects also completed a visual task that compared the same stimuli. Within the brain regions activated by the visually-identified FEEs (relative to shoes), haptic identification of FEEs (relative to shoes) by the early-blind and sighted individuals activated the posterior middle temporal gyrus adjacent to the superior temporal sulcus, the inferior frontal gyrus, and the fusiform gyrus. Collectively, these results suggest that the brain network responsible for FEE recognition can develop without any visual experience of faces.

  18. Lexical association and false memory for words in two cultures.

    Science.gov (United States)

    Lee, Yuh-shiow; Chiang, Wen-Chi; Hung, Hsu-Ching

    2008-01-01

    This study examined the relationship between language experience and false memory produced by the DRM paradigm. The word lists used in Stadler, et al. (Memory & Cognition, 27, 494-500, 1999) were first translated into Chinese. False recall and false recognition for critical non-presented targets were then tested on a group of Chinese users. The average co-occurrence rate of the list word and the critical word was calculated based on two large Chinese corpuses. List-level analyses revealed that the correlation between the American and Taiwanese participants was significant only in false recognition. More importantly, the co-occurrence rate was significantly correlated with false recall and recognition of Taiwanese participants, and not of American participants. In addition, the backward association strength based on Nelson et al. (The University of South Florida word association, rhyme and word fragment norms, 1999) was significantly correlated with false recall of American participants and not of Taiwanese participants. Results are discussed in terms of the relationship between language experiences and lexical association in creating false memory for word lists.

  19. Speaker information affects false recognition of unstudied lexical-semantic associates.

    Science.gov (United States)

    Luthra, Sahil; Fox, Neal P; Blumstein, Sheila E

    2018-05-01

    Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.

  20. Teaching braille letters, numerals, punctuation, and contractions to sighted individuals.

    Science.gov (United States)

    Putnam, Brittany C; Tiger, Jeffrey H

    2015-01-01

    Braille-character recognition is one of the foundational skills required for teachers of braille. Prior research has evaluated computer programming for teaching braille-to-print letter relations (e.g., Scheithauer & Tiger, 2012). In the current study, we developed a program (the Visual Braille Trainer) to teach not only letters but also numerals, punctuation, symbols, and contractions; we evaluated this program with 4 sighted undergraduate participants. Exposure to this program resulted in mastery of all braille-to-print relations for each participant. © Society for the Experimental Analysis of Behavior.

  1. Processing Electromyographic Signals to Recognize Words

    Science.gov (United States)

    Jorgensen, C. C.; Lee, D. D.

    2009-01-01

    A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
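
    A hedged sketch of the processing chain just described, with fabricated signals: windowed surface-EMG traces are reduced to simple time-domain features (RMS amplitude per window) and fed to a neural-network pattern classifier that maps feature vectors to word labels. Real systems use richer features, real recordings, and carefully tuned networks; everything below is illustrative.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      n_samples, sig_len, n_windows, n_words = 200, 1000, 10, 5

      signals = rng.normal(size=(n_samples, sig_len))   # stand-in EMG recordings
      labels = rng.integers(0, n_words, n_samples)      # which (sub-vocalized) word was intended

      # Feature extraction: root-mean-square amplitude in equal-width windows
      windows = signals.reshape(n_samples, n_windows, -1)
      features = np.sqrt((windows ** 2).mean(axis=2))

      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
      clf.fit(features, labels)
      print("training accuracy:", clf.score(features, labels))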

  2. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition

    Science.gov (United States)

    Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher

    2012-01-01

    This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. The system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A framework for multi-word recognition based on weighted finite-state transducers is presented, using an explicit word segmentation, a combination of isolated word recognizers, and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and was submitted to the Rimes-ICDAR2011 competition. This system outperformed all previously proposed systems on these tasks.
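
    The toy search below illustrates, in miniature, the general idea of combining several word recognizers with a language model over a word lattice; a production system such as the one described would express this with weighted finite-state transducers rather than explicit dictionaries. The two-position lattice, the bigram scores, and the combination weights are all invented.

      import math  # kept for clarity: scores below are log-probabilities

      lattice = [  # per-position recognizer scores (log-probs) for competing word hypotheses
          {"le": {"rec1": -0.2, "rec2": -0.4}, "la": {"rec1": -1.5, "rec2": -1.2}},
          {"chat": {"rec1": -0.3, "rec2": -0.2}, "chien": {"rec1": -1.0, "rec2": -0.9}},
      ]
      bigram_lm = {("<s>", "le"): -0.5, ("<s>", "la"): -0.7,
                   ("le", "chat"): -0.4, ("le", "chien"): -1.1,
                   ("la", "chat"): -1.3, ("la", "chien"): -0.6}
      weights = {"rec1": 0.6, "rec2": 0.4}   # recognizer combination weights
      lm_weight = 1.0

      def combined(scores):
          """Weighted combination of the individual recognizers' scores for one word."""
          return sum(weights[r] * s for r, s in scores.items())

      # Viterbi-style search over the two-position lattice
      best = {w: combined(s) + lm_weight * bigram_lm[("<s>", w)] for w, s in lattice[0].items()}
      path, final = {}, {}
      for w2, s2 in lattice[1].items():
          cand = {w1: best[w1] + combined(s2) + lm_weight * bigram_lm[(w1, w2)] for w1 in best}
          prev = max(cand, key=cand.get)
          final[w2], path[w2] = cand[prev], prev
      last = max(final, key=final.get)
      print("best hypothesis:", path[last], last)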

  3. Novel second language words and asymmetric lexical access

    NARCIS (Netherlands)

    Escudero, P.; Hayes-Harb, R.; Mitterer, H.

    2008-01-01

    The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English

  4. Electrophysiological correlates of word recognition memory process in patients with ischemic left ventricular dysfunction.

    Science.gov (United States)

    Giovannelli, Fabio; Simoni, David; Gavazzi, Gioele; Giganti, Fiorenza; Olivotto, Iacopo; Cincotta, Massimo; Pratesi, Alessandra; Baldasseroni, Samuele; Viggiano, Maria Pia

    2016-09-01

    The relationship between left ventricular ejection fraction (LVEF) and cognitive performance in patients with coronary artery disease without overt heart failure is still under debate. In this study we combine behavioral measures and event-related potentials (ERPs) to verify whether electrophysiological correlates of recognition memory (old/new effect) are modulated differently as a function of LVEF. Twenty-three male patients (12 without [LVEF>55%] and 11 with [LVEF25 were enrolled. ERPs were recorded while participants performed an old/new visual word recognition task. A late positive ERP component between 350 and 550 ms was differentially modulated in the two groups: a clear old/new effect (enhanced mean amplitude for old relative to new items) was observed in patients without LVEF dysfunction, whereas patients with overt LVEF dysfunction did not show this effect. In contrast, no significant differences emerged for behavioral performance and neuropsychological evaluations. These data suggest that ERPs may reveal functional brain abnormalities that are not observed at the behavioral level. Detecting sub-clinical measures of cognitive decline may help to set appropriate treatments and to monitor asymptomatic or mildly symptomatic patients with LVEF dysfunction. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  5. The effects of age and divided attention on spontaneous recognition.

    Science.gov (United States)

    Anderson, Benjamin A; Jacoby, Larry L; Thomas, Ruthann C; Balota, David A

    2011-05-01

    Studies of recognition typically involve tests in which the participant's memory for a stimulus is directly questioned. There are occasions however, in which memory occurs more spontaneously (e.g., an acquaintance seeming familiar out of context). Spontaneous recognition was investigated in a novel paradigm involving study of pictures and words followed by recognition judgments on stimuli with an old or new word superimposed over an old or new picture. Participants were instructed to make their recognition decision on either the picture or word and to ignore the distracting stimulus. Spontaneous recognition was measured as the influence of old vs. new distracters on target recognition. Across two experiments, older adults and younger adults placed under divided-attention showed a greater tendency to spontaneously recognize old distracters as compared to full-attention younger adults. The occurrence of spontaneous recognition is discussed in relation to ability to constrain retrieval to goal-relevant information.

  6. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  7. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    Science.gov (United States)

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

    Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. Autism Res 2017, 10: 1280-1290. © 2017 International

  8. Evidence for simultaneous syntactic processing of multiple words during reading

    NARCIS (Netherlands)

    Snell, Joshua; Meeter, Martijn; Grainger, Jonathan

    2017-01-01

    A hotly debated issue in reading research concerns the extent to which readers process parafoveal words, and how parafoveal information might influence foveal word recognition. We investigated syntactic word processing both in sentence reading and in reading isolated foveal words when these were

  9. The role of semantic and phonological factors in word recognition: an ERP cross-modal priming study of derivational morphology.

    Science.gov (United States)

    Kielar, Aneta; Joanisse, Marc F

    2011-01-01

    Theories of morphological processing differ on the issue of how lexical and grammatical information are stored and accessed. A key point of contention is whether complex forms are decomposed during recognition (e.g., establish+ment), compared to forms that cannot be analyzed into constituent morphemes (e.g., apartment). In the present study, we examined these issues with respect to English derivational morphology by measuring ERP responses during a cross-modal priming lexical decision task. ERP priming effects for semantically and phonologically transparent derived words (government-govern) were compared to those of semantically opaque derived words (apartment-apart) as well as "quasi-regular" items that represent intermediate cases of morphological transparency (dresser-dress). Additional conditions independently manipulated semantic and phonological relatedness in non-derived words (semantics: couch-sofa; phonology: panel-pan). The degree of N400 ERP priming to morphological forms varied depending on the amount of semantic and phonological overlap between word types, rather than respecting a bivariate distinction between derived and opaque forms. Moreover, these effects could not be accounted for by semantic or phonological relatedness alone. The findings support the theory that morphological relatedness is graded rather than absolute, and depend on the joint contribution of form and meaning overlap. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Affective orientation influences memory for emotional and neutral words.

    Science.gov (United States)

    Greenberg, Seth N; Tokarev, Julian; Estes, Zachary

    2012-01-01

    Memory is better for emotional words than for neutral words, but the conditions contributing to emotional memory improvement are not entirely understood. Elsewhere, it has been observed that retrieval of a word is easier when its attributes are congruent with a property assessed during an earlier judgment task. The present study examined whether affective assessment of a word matters to its remembrance. Two experiments were run, one in which only valence assessment was performed, and another in which valence assessment was combined with a running recognition for list words. In both experiments, some participants judged whether each word in a randomized list was negative (negative monitoring), and others judged whether each was positive (positive monitoring). We then tested their explicit memory for the words via both free recall and delayed recognition. Both experiments revealed an affective congruence effect, such that negative words were more likely to be recalled and recognized after negative monitoring, whereas positive words likewise benefited from positive monitoring. Memory for neutral words was better after negative monitoring than positive monitoring. Thus, memory for both emotional and neutral words is contingent on one's affective orientation during encoding.

  11. Word-stem priming and recognition in type 2 diabetes mellitus, Alzheimer's disease patients and healthy older adults.

    Science.gov (United States)

    Redondo, María Teresa; Beltrán-Brotóns, José Luís; Reales, José Manuel; Ballesteros, Soledad

    2015-11-01

    The present study investigated (a) whether the pattern of performance on implicit and explicit memory of patients with type 2 diabetes mellitus (DM2) is more similar to that of patients with Alzheimer's disease (AD) or to cognitively normal older adults and (b) whether glycosylated hemoglobin levels (a measure of glucose regulation) are related to performance on the two memory tasks, implicit word-stem completion and "old-new" recognition. The procedures of both memory tasks included encoding and memory test phases separated by a short delay. Three groups of participants (healthy older adults, DM2 patients and AD patients) completed medical and psychological assessments and performed both memory tasks on a computer. The results of the word-stem completion task showed similar implicit memory in the three groups. By contrast, explicit recognition of the three groups differed. Implicit memory was not affected by either normal or pathological aging, but explicit memory deteriorated in the two groups of patients, especially in AD patients, showing a severe impairment compared to the cognitively healthy older adults. Importantly, glycosylated hemoglobin levels were not related to performance on either implicit or explicit memory tasks. These findings revealed a clear dissociation between explicit and implicit memory tasks in normal and pathological aging. Neuropsychologists and clinicians working with DM2 patients should be aware that the decline of voluntary, long-term explicit memory could have a negative impact on their treatment management. By contrast, the intact implicit memory of the two clinical groups could be used in rehabilitation.

  12. SUBTLEX-ESP: Spanish Word Frequencies Based on Film Subtitles

    Science.gov (United States)

    Cuetos, Fernando; Glez-Nosti, Maria; Barbon, Analia; Brysbaert, Marc

    2011-01-01

    Recent studies have shown that word frequency estimates obtained from films and television subtitles are better to predict performance in word recognition experiments than the traditional word frequency estimates based on books and newspapers. In this study, we present a subtitle-based word frequency list for Spanish, one of the most widely spoken…

  13. 220 Names/Faces 220 Dolch Words Are Too Many for Students with Memories Like Mine. AVKO "Great Idea" Reprint Series No. 601.

    Science.gov (United States)

    McCabe, Don

    This booklet discusses a procedure to assist students experiencing difficulty in learning the "Dolch Basic Sight Vocabulary of 220 Words" and rearranges a list of 220 words to make it easier for students to learn. The procedure discussed in the booklet is based on the "word family" approach, in which words like "all call,…

  14. Handwriting versus Keyboard Writing: Effect on Word Recall

    Directory of Open Access Journals (Sweden)

    Anne Mangen

    2015-10-01

    Full Text Available The objective of this study was to explore effects of writing modality on word recall and recognition. The following three writing modalities were used: handwriting with pen on paper; typewriting on a conventional laptop keyboard; and typewriting on an iPad touch keyboard. Thirty-six females aged 19-54 years participated in a fully counterbalanced within-subjects experimental design. Using a wordlist paradigm, participants were instructed to write down words (one list per writing modality) read out loud to them, in the three writing modalities. Memory for words written using handwriting, a conventional keyboard and a virtual iPad keyboard was assessed using oral free recall and recognition. The data was analyzed using non-parametric statistics. Results show that there was an omnibus effect of writing modality and follow-up analyses showed that, for the free recall measure, participants had significantly better free recall of words written in the handwriting condition, compared to both keyboard writing conditions. There was no effect of writing modality in the recognition condition. This indicates that, with respect to aspects of word recall, there may be certain cognitive benefits to handwriting which may not be fully retained in keyboard writing. Cognitive and educational implications of this finding are discussed.

  15. Comparing word and face recognition

    DEFF Research Database (Denmark)

    Robotham, Ro Julia; Starrfelt, Randi

    2017-01-01

    included, as a control, which makes designing experiments all the more challenging. Three main strategies have been used to overcome this problem, each of which has limitations: 1) Compare performances on typical tests of the three stimulus types (e.g., a Face Memory Test, an Object recognition test...... this framework to classify tests and experiments aiming to compare processing across these categories, it becomes apparent that core differences in characteristics (visual and semantic) between the stimuli make the problem of designing comparable tests an insoluble conundrum. By analyzing the experimental...

  16. Semantic size does not matter: "bigger" words are not recognized faster.

    Science.gov (United States)

    Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A

    2011-06-01

    Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment--despite a much larger subject sample size than Sereno et al.--converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.

  17. Measuring Ability in Foreign Language Word Recognition: A Novel Test and An Alternative to Segalowitz's "CV-rt" Fluency Index

    OpenAIRE

    Coulson, David

    2011-01-01

    Tests of word-recognition speed (lexical accessibility) for second language learners have become more common in recent years as its importance in lexical processing has become apparent. However, the very short reaction-time latencies mean they are often complicated to handle or set up in school-based testing situations. They may also produce data that is hard to interpret or which lacks construct validity. Our solution to this problem is a quick-and-easy test called Q_Lex which can be used by...

  18. The word frequency effect in first- and second-language word recognition: A lexical entrenchment account

    NARCIS (Netherlands)

    Diependaele, K.; Lemhöfer, K.M.L.; Brysbaert, M.

    2013-01-01

    We investigate the origin of differences in the word frequency effect between native speakers and second-language speakers. In a large-scale analysis of English word identification times we find that group-level differences are fully accounted for by the individual language proficiency scores.

  19. Word attributes and lateralization revisited: implications for dual coding and discrete versus continuous processing.

    Science.gov (United States)

    Boles, D B

    1989-01-01

    Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.

  20. The effect of Trier Social Stress Test (TSST) on item and associative recognition of words and pictures in healthy participants

    Directory of Open Access Journals (Sweden)

    Jonathan eGuez

    2016-04-01

    Full Text Available Psychological stress, induced by the Trier Social Stress Test (TSST), has repeatedly been shown to alter memory performance. Although factors influencing memory performance such as stimulus nature (verbal/pictorial) and emotional valence have been extensively studied, results on whether stress impairs or improves memory are still inconsistent. This study aimed at exploring the effect of the TSST on item versus associative memory for neutral, verbal, and pictorial stimuli. 48 healthy subjects were recruited; 24 participants were randomly assigned to the TSST group and the remaining 24 participants were assigned to the control group. Stress reactivity was measured by psychological (subjective state anxiety ratings) and physiological (galvanic skin response recording) measurements. Subjects performed an item-association memory task for both stimulus types (words, pictures) simultaneously, before and after the stress/non-stress manipulation. The results showed that memory recognition for pictorial stimuli was higher than for verbal stimuli. Memory for both words and pictures was impaired following the TSST; while the source of this impairment was specific to associative recognition for pictures, a more general deficit was observed for verbal material, as expressed in decreased recognition for both items and associations following the TSST. Response latency analysis indicated that the TSST manipulation decreased response time but at the cost of memory accuracy. We conclude that stress does not uniformly affect memory; rather, it interacts with the task's cognitive load and stimulus type. Applying the current study results to patients diagnosed with disorders associated with traumatic stress, our findings in healthy subjects under acute stress provide further support for our assertion that patients' impaired memory originates in poor recollection processing following depletion of attentional resources.

  1. Visual Recognition and Its Application to Robot Arm Control

    Directory of Open Access Journals (Sweden)

    Jih-Gau Juang

    2015-10-01

    Full Text Available This paper presents an application of optical word recognition and fuzzy control to a smartphone automatic test system. The system consists of a robot arm and two webcams. After the words from the control panel that represent commands are recognized by the robot system, the robot arm performs the corresponding actions to test the smartphone. One of the webcams is utilized to capture commands on the screen of the control panel, the other to recognize the words on the screen of the tested smartphone. The method of image processing is based on the Red-Green-Blue (RGB) and Hue-Saturation-Luminance (HSL) color spaces to reduce the influence of light. Fuzzy theory is used in the robot arm's position control. The Optical Character Recognition (OCR) technique is applied to the word recognition, and the recognition results are then checked by a dictionary process to increase the recognition accuracy. The camera which is used to recognize the tested smartphone also provides object coordinates to the fuzzy controller, then the robot arm moves to the desired positions and presses the desired buttons. The proposed control scheme allows the robot arm to perform different assigned test functions successfully.
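
    The snippet below sketches two pieces of the scheme described above with invented stand-ins: a dictionary check that snaps a noisy OCR reading onto the closest known command (using Python's standard difflib), and a toy fuzzy rule that maps position error to arm speed. recognize_characters is a placeholder rather than a real OCR call, and the command list, memberships, and speeds are assumptions.

      import difflib

      COMMANDS = ["dial", "hang up", "open camera", "volume up", "volume down"]

      def recognize_characters(image):
          # placeholder for the OCR step; pretend the panel read back a noisy string
          return "volume vp"

      def dictionary_check(raw_text):
          """Snap a noisy OCR result onto the closest known command, if any."""
          match = difflib.get_close_matches(raw_text, COMMANDS, n=1, cutoff=0.6)
          return match[0] if match else None

      def tri(x, a, b, c):
          """Triangular membership function on [a, c] peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def fuzzy_speed(error_px):
          """Rules: NEAR -> creep (0.1), MID -> medium (0.5), FAR -> fast (1.0); weighted-average defuzzification."""
          e = abs(error_px)
          near, mid, far = tri(e, -1, 0, 40), tri(e, 20, 60, 120), tri(e, 80, 200, 10000)
          total = near + mid + far
          return (near * 0.1 + mid * 0.5 + far * 1.0) / total if total else 1.0

      command = dictionary_check(recognize_characters(None))
      print("command:", command, "| speed toward button:", round(fuzzy_speed(35), 2))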

  2. Word skipping: effects of word length, predictability, spelling and reading skill.

    Science.gov (United States)

    Slattery, Timothy J; Yates, Mark

    2017-08-31

    Readers' eyes often skip over words as they read. Skipping rates are largely determined by word length; short words are skipped more than long words. However, the predictability of a word in context also impacts skipping rates. Rayner, Slattery, Drieghe and Liversedge (2011) reported an effect of predictability on word skipping for even long words (10-13 characters) that extend beyond the word identification span. Recent research suggests that better readers and spellers have an enhanced perceptual span (Veldre & Andrews, 2014). We explored whether reading and spelling skill interact with word length and predictability to impact word skipping rates in a large sample (N=92) of average and poor adult readers. Participants read the items from Rayner et al. (2011) while their eye movements were recorded. Spelling skill (zSpell) was assessed using the dictation and recognition tasks developed by Sally Andrews and colleagues. Reading skill (zRead) was assessed from reading speed (words per minute) and accuracy on three 120-word passages, each with 10 comprehension questions. We fit linear mixed models to the target gaze duration data and generalized linear mixed models to the target word skipping data. Target word gaze durations were significantly predicted by zRead, while the skipping likelihoods were significantly predicted by zSpell. Additionally, for gaze durations, zRead significantly interacted with word predictability, as better readers relied less on context to support word processing. These effects are discussed in relation to the lexical quality hypothesis and eye movement models of reading.
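
    As a rough, hedged illustration of the modelling strategy mentioned above, the snippet fits a linear mixed model to fabricated gaze durations (random intercepts by subject, via statsmodels) and an ordinary logistic regression as a simplified stand-in for the generalized linear mixed model of skipping (item random effects are omitted). All variable names, effect sizes, and data are invented.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 600
      df = pd.DataFrame({
          "subject": rng.integers(0, 30, n),
          "length": rng.integers(3, 13, n).astype(float),
          "predictability": rng.random(n),
          "zSpell": rng.normal(size=n),
          "zRead": rng.normal(size=n),
      })
      # fabricated generative story: longer words -> longer gazes; skilled readers lean less on context
      df["gaze"] = 250 + 8 * df.length - 40 * df.predictability * df.zRead + rng.normal(0, 30, n)
      p_skip = 1 / (1 + np.exp(0.4 * df.length - 2 * df.predictability - 0.3 * df.zSpell))
      df["skipped"] = (rng.random(n) < p_skip).astype(int)

      lmm = smf.mixedlm("gaze ~ length + predictability * zRead", df, groups=df["subject"]).fit()
      glm = smf.logit("skipped ~ length + predictability * zSpell", df).fit(disp=False)
      print(lmm.params.round(2))
      print(glm.params.round(2))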

  3. Usage of semantic representations in recognition memory.

    Science.gov (United States)

    Nishiyama, Ryoji; Hirano, Tetsuji; Ukita, Jun

    2017-11-01

    Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are both directly and indirectly (i.e., mediated by perceptual representations) used in remembering. Studies using memory conjunction error (MCE) paradigms, in which the lures consist of component parts of studied words, have reported semantic facilitation of rejection of the lures. However, attending to components of the lures could potentially cause this. Therefore, we investigated whether semantic overlap of lures facilitates MCEs using Japanese Kanji words, for which the whole-word image plays a larger role in reading. Experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1), in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), and most saliently in individuals with high semantic memory capacity (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributable to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be directly used in remembering, even when perceptual representations of studied words are available.

  4. The optimal viewing position in face recognition.

    Science.gov (United States)

    Hsiao, Janet H; Liu, Tina T

    2012-02-28

    In English word recognition, the best recognition performance is usually obtained when the initial fixation is directed to the left of the center (optimal viewing position, OVP). This effect has been argued to involve an interplay of left hemisphere lateralization for language processing and the perceptual experience of fixating at word beginnings most often. While both factors predict a left-biased OVP in visual word recognition, in face recognition they predict contrasting biases: People prefer to fixate the left half-face, suggesting that the OVP should be to the left of the center; nevertheless, the right hemisphere lateralization in face processing suggests that the OVP should be to the right of the center in order to project most of the face to the right hemisphere. Here, we show that the OVP in face recognition was to the left of the center, suggesting greater influence from the perceptual experience than hemispheric asymmetry in central vision. In contrast, hemispheric lateralization effects emerged when faces were presented away from the center; there was an interaction between presented visual field and location (center vs. periphery), suggesting differential influence from perceptual experience and hemispheric asymmetry in central and peripheral vision.

  5. [Environmental context effects of background colors on recognition memory].

    Science.gov (United States)

    Isarida, Takeo; Ozecki, Kousuke

    2005-02-01

    Three experiments examined whether or not switching study background-color contexts among target words at testing reduces word-recognition performance. These experiments also examined whether or not presentation rate--one of the determinants of item strength--interacted with background-color context. Undergraduates learned 40 target words presented at a rate of 1.5 or 3.0 seconds per word in one of two background-color contexts in Experiment 1, and in one of ten contexts in Experiments 2 and 3. Recognition of the targets was tested by mixing 40 distractor words with the targets immediately after the learning session in Experiments 1 and 2, and with a 5-minute filled retention interval in Experiment 3. Experiment 1 failed to find background-color context effects on recognition, but Experiments 2 and 3 successfully found the context effects. Presentation rate did not interact with the context effects. The results conflict with the ICE theory. The implications of the present findings are discussed.

  6. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    Science.gov (United States)

    Simpson, C. A.

    1985-01-01

    In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.

  7. The Role of Word Recognition, Oral Reading Fluency and Listening Comprehension in the Simple View of Reading: A Study in an Intermediate Depth Orthography

    Science.gov (United States)

    Cadime, Irene; Rodrigues, Bruna; Santos, Sandra; Viana, Fernanda Leopoldina; Chaves-Sousa, Séli; do Céu Cosme, Maria; Ribeiro, Iolanda

    2017-01-01

    Empirical research has provided evidence for the simple view of reading across a variety of orthographies, but the role of oral reading fluency in the model is unclear. Moreover, the relative weight of listening comprehension, oral reading fluency and word recognition in reading comprehension seems to vary across orthographies and schooling years.…

  8. Deep Belief Networks Based Toponym Recognition for Chinese Text

    Directory of Open Access Journals (Sweden)

    Shu Wang

    2018-06-01

    In Geographical Information Systems, geo-coding is used for the task of mapping from implicitly geo-referenced data to explicitly geo-referenced coordinates. At present, an enormous amount of implicitly geo-referenced information is hidden in unstructured text, e.g., Wikipedia, social data and news. Toponym recognition is the foundation of mining this useful geo-referenced information by identifying words as toponyms in text. In this paper, we propose an adapted toponym recognition approach based on a deep belief network (DBN) by exploring two key issues: word representation and model interpretation. A Skip-Gram model is used in the word representation process to represent words with the contextual information that is ignored by current word representation models. We then determine the core hyper-parameters of the DBN model by illustrating the relationship between performance and the hyper-parameters, e.g., vector dimensionality, DBN structures and probability thresholds. The experiments evaluate the performance of the Skip-Gram model implemented by the Word2Vec open-source tool, determine stable hyper-parameters and compare our approach with a conditional random field (CRF) based approach. The experimental results show that the DBN model outperforms the CRF model with a smaller corpus. When the corpus size is large enough, their statistical metrics become comparable. However, their recognition results show differences and complementarity on different kinds of toponyms. More importantly, combining their results can directly improve the performance of toponym recognition relative to their individual performances. It seems that the scale of the corpus has an obvious effect on the performance of toponym recognition. Generally, there is no adequate tagged corpus for specific toponym recognition tasks, especially in the era of Big Data. In conclusion, we believe that the DBN-based approach is a promising and powerful method to extract geo
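
    The word-representation step can be illustrated with a short sketch: Skip-Gram vectors are trained with gensim's Word2Vec (assuming the gensim 4 API) and fed to a per-token classifier. A scikit-learn MLP stands in for the deep belief network, since scikit-learn has no DBN implementation; the toy corpus, labels and hyper-parameters are assumptions, not the paper's pipeline.

```python
# Sketch: Skip-Gram word vectors as per-token features for toponym tagging.
# An MLP replaces the DBN; corpus and labels are toy assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

sentences = [
    ["beijing", "is", "the", "capital", "of", "china"],
    ["heavy", "rain", "hit", "shanghai", "and", "guangzhou", "yesterday"],
    ["the", "meeting", "was", "held", "in", "beijing", "last", "week"],
]
labels = {"beijing": 1, "shanghai": 1, "guangzhou": 1, "china": 1}  # 1 = toponym

# sg=1 selects the Skip-Gram architecture (as opposed to CBOW).
w2v = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=200)

X = np.array([w2v.wv[tok] for sent in sentences for tok in sent])
y = np.array([labels.get(tok, 0) for sent in sentences for tok in sent])

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(w2v.wv[["beijing", "meeting"]]))
```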

  9. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  10. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  11. Effects of hydrocortisone on false memory recognition in healthy men and women.

    Science.gov (United States)

    Duesenberg, Moritz; Weber, Juliane; Schaeuffele, Carmen; Fleischer, Juliane; Hellmann-Regen, Julian; Roepke, Stefan; Moritz, Steffen; Otte, Christian; Wingenfeld, Katja

    2016-12-01

    Most studies of the effect of stress on false memories, using psychosocial and physiological stressors, have yielded diverse results. In the present study, we systematically tested the effect of exogenous hydrocortisone using a false memory paradigm. In this placebo-controlled study, 37 healthy men and 38 healthy women (mean age 24.59 years) received either 10 mg of hydrocortisone or placebo 75 min before completing the false memory (Deese-Roediger-McDermott; DRM) paradigm. We used emotionally charged and neutral DRM-based word lists to compare false recognition rates with true recognition rates. Overall, we expected an increase in false memory after hydrocortisone compared to placebo. No differences between the cortisol and the placebo group were revealed for false and for true recognition performance. In general, false recognition rates were lower compared to true recognition rates. Furthermore, we found a valence effect (neutral, positive, negative, disgust word stimuli), indicating higher rates of true and false recognition for emotional compared to neutral words. We further found an interaction effect between sex and recognition. Post hoc t tests showed that for true recognition women showed a significantly better memory performance than men, independent of treatment. This study does not support the hypothesis that cortisol decreases the ability to distinguish between old versus novel words in young healthy individuals. However, sex and emotional valence of word stimuli appear to be important moderators. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Bilingual visual word recognition and lexical access

    NARCIS (Netherlands)

    Dijkstra, A.F.J.; Kroll, J.F.; Groot, A.M.B. de

    2005-01-01

    In spite of the intuition of many bilinguals, a review of empirical studies indicates that during reading under many circumstances, possible words from different languages temporarily become active. Such evidence for "language non-selective lexical access" is found using stimulus materials of

  13. Source memory enhancement for emotional words.

    Science.gov (United States)

    Doerksen, S; Shimamura, A P

    2001-03-01

    The influence of emotional stimuli on source memory was investigated by using emotionally valenced words. The words were colored blue or yellow (Experiment 1) or surrounded by a blue or yellow frame (Experiment 2). Participants were asked to associate the words with the colors. In both experiments, emotionally valenced words elicited enhanced free recall compared with nonvalenced words; however, recognition memory was not affected. Source memory for the associated color was also enhanced for emotional words, suggesting that even memory for contextual information is benefited by emotional stimuli. This effect was not due to the ease of semantic clustering of emotional words because semantically related words were not associated with enhanced source memory, despite enhanced recall (Experiment 3). It is suggested that enhancement resulted from facilitated arousal or attention, which may act to increase organization processes important for source memory.

  14. Device-Free Indoor Activity Recognition System

    Directory of Open Access Journals (Sweden)

    Mohammed Abdulaziz Aide Al-qaness

    2016-11-01

    In this paper, we explore the properties of the Channel State Information (CSI) of WiFi signals and present a device-free indoor activity recognition system. Our proposed system uses only one ubiquitous router access point and a laptop as a detection point, while the user is free and neither needs to wear sensors nor carry devices. The proposed system recognizes six daily activities: walk, crawl, fall, stand, sit, and lie. We have built the prototype with an effective feature extraction method and a fast classification algorithm. The proposed system has been evaluated in a real and complex environment in both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios, and the results validate the performance of the proposed system.

  15. Sighting optics including an optical element having a first focal length and a second focal length and methods for sighting

    Science.gov (United States)

    Crandall, David Lynn

    2011-08-16

    Sighting optics include a front sight and a rear sight positioned in a spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus for a user images of the front sight and the target.

  16. Words don't come easy

    DEFF Research Database (Denmark)

    Starrfelt, Randi

    of reading, and with the use of functional imaging techniques. Extant evidence for (and against) cerebral specialization for visual word recognition is briefly reviewed and found inconclusive. Study I is a case study of a patient with a very selective alexia and agraphia affecting reading and writing of letters and words but not numbers. This study raised questions of "where" in the cognitive system such a deficit may arise, and whether it can be attributed to a deficit in a system specialized for reading or letter knowledge. The following studies investigated these questions ... and object processing, may explain the pattern of activations found in our and other functional imaging studies of the visual word form area. Study III reports a patient (NN) with pure alexia. NN is not impaired in object recognition, but his deficit(s) affects processing speed ...

  17. Lwati: A Journal of Contemporary Research - Vol 10, No 4 (2013)

    African Journals Online (AJOL)

    Effect of Concentrated Language Encounter Method in Developing Sight Word Recognition Skills in Primary School Pupils in Cross River State. EU Ikwen, 8-18 ...

  18. Young toddlers' word comprehension is flexible and efficient.

    Directory of Open Access Journals (Sweden)

    Elika Bergelson

    Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children's gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers' spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice, which could be called "cup" or "juice") and pictures that had only one likely name for toddlers (such as "apple"), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.

  19. Diminutives facilitate word segmentation in natural speech: cross-linguistic evidence.

    Science.gov (United States)

    Kempe, Vera; Brooks, Patricia J; Gillis, Steven; Samson, Graham

    2007-06-01

    Final-syllable invariance is characteristic of diminutives (e.g., doggie), which are a pervasive feature of the child-directed speech registers of many languages. Invariance in word endings has been shown to facilitate word segmentation (Kempe, Brooks, & Gillis, 2005) in an incidental-learning paradigm in which synthesized Dutch pseudonouns were used. To broaden the cross-linguistic evidence for this invariance effect and to increase its ecological validity, adult English speakers (n=276) were exposed to naturally spoken Dutch or Russian pseudonouns presented in sentence contexts. A forced choice test was given to assess target recognition, with foils comprising unfamiliar syllable combinations in Experiments 1 and 2 and syllable combinations straddling word boundaries in Experiment 3. A control group (n=210) received the recognition test with no prior exposure to targets. Recognition performance improved with increasing final-syllable rhyme invariance, with larger increases for the experimental group. This confirms that word ending invariance is a valid segmentation cue in artificial, as well as naturalistic, speech and that diminutives may aid segmentation in a number of languages.

  20. Electronic Control System Of Home Appliances Using Speech Command Words

    Directory of Open Access Journals (Sweden)

    Aye Min Soe

    2015-06-01

    The main idea of this paper is to develop a speech recognition system by which smart home appliances are controlled by spoken words. The spoken words chosen for recognition are "Fan On", "Fan Off", "Light On", "Light Off", "TV On" and "TV Off". The input of the system takes speech signals to control home appliances. The proposed system has two main parts: speech recognition and the smart home appliance electronic control system. Speech recognition is implemented in the MATLAB environment and contains two main modules: feature extraction and feature matching. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction. A Vector Quantization (VQ) approach using a clustering algorithm is applied for feature matching. In the electrical home appliance control system, an RF module is used to carry the command signal from the PC to the microcontroller wirelessly. The microcontroller is connected to a driver circuit for the relay and motor. The input commands are recognized reliably, and the system performs well in controlling home appliances by spoken words.
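
    A minimal sketch of the MFCC-plus-VQ matching described above, written in Python rather than MATLAB: librosa extracts the MFCC frames and a k-means codebook per command word serves as the vector quantizer, with recognition by lowest average distortion. File names, codebook size and the decision rule are illustrative assumptions.

```python
# Sketch: per-command VQ codebooks over MFCC frames; recognition picks the
# codebook with the lowest average distortion. Paths and sizes are assumed.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(path, n_mfcc=13):
    """Return the MFCC feature vectors (frames x coefficients) for one utterance."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_codebooks(training_files, codebook_size=16):
    """One VQ codebook (k-means centroids) per command word."""
    codebooks = {}
    for command, paths in training_files.items():
        frames = np.vstack([mfcc_frames(p) for p in paths])
        codebooks[command] = KMeans(n_clusters=codebook_size, n_init=10).fit(frames)
    return codebooks

def recognize(path, codebooks):
    """Pick the command whose codebook gives the lowest average distortion."""
    frames = mfcc_frames(path)
    def distortion(km):
        d = np.linalg.norm(frames[:, None, :] - km.cluster_centers_[None], axis=2)
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda c: distortion(codebooks[c]))

# Hypothetical usage (file names are placeholders):
# codebooks = train_codebooks({"light_on": ["light_on_01.wav"], "fan_off": ["fan_off_01.wav"]})
# print(recognize("test_utterance.wav", codebooks))
```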

  1. Working memory for vibrotactile frequencies: comparison of cortical activity in blind and sighted individuals.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Dixit, Sachin

    2010-11-01

    In blind individuals, occipital cortex showed robust activation to nonvisual stimuli in many prior functional neuroimaging studies. The cognitive processes represented by these activations are not fully determined, although a verbal recognition memory role has been demonstrated. In congenitally blind and sighted participants (10 per group), we contrasted responses to a vibrotactile one-back frequency retention task with 5-s delays and a vibrotactile amplitude-change task; both tasks involved the same vibration parameters. The one-back paradigm required continuous updating for working memory (WM). Findings in both groups confirmed roles in WM for right hemisphere dorsolateral prefrontal cortex (DLPFC) and dorsal/ventral attention components of posterior parietal cortex. Negative findings in bilateral ventrolateral prefrontal cortex suggested task performance without subvocalization. In bilateral occipital cortex, the blind showed comparable positive responses to both tasks, whereas WM evoked large negative responses in the sighted. Greater utilization of attention resources in the blind was suggested as the cause of larger responses in dorsal and ventral attention systems, right DLPFC, and persistent responses across delays between trials in somatosensory and premotor cortex. In the sighted, responses in somatosensory and premotor areas showed iterated peaks matched to stimulation trial intervals. The findings in occipital cortex of the blind suggest that tactile activations do not represent cognitive operations for a nonverbal WM task. However, these data suggest a role in sensory processing for tactile information in the blind that parallels a similar contribution for visual stimuli in occipital cortex of the sighted. © 2010 Wiley-Liss, Inc.

  2. Using Recall to Reduce False Recognition: Diagnostic and Disqualifying Monitoring

    Science.gov (United States)

    Gallo, David A.

    2004-01-01

    Whether recall of studied words (e.g., parsley, rosemary, thyme) could reduce false recognition of related lures (e.g., basil) was investigated. Subjects studied words from several categories for a final recognition memory test. Half of the subjects were given standard test instructions, and half were instructed to use recall to reduce false…

  3. Spatial attention in written word perception

    Directory of Open Access Journals (Sweden)

    Veronica eMontani

    2014-02-01

    The role of attention in visual word recognition and reading aloud is a long debated issue. Studies of both developmental and acquired reading disorders provide growing evidence that spatial attention is critically involved in word reading, in particular for the phonological decoding of unfamiliar letter strings. However, studies on healthy participants have produced contrasting results. The aim of this study was to investigate how the allocation of spatial attention may influence the perception of letter strings in skilled readers. High frequency words, low frequency words and pseudowords were briefly and parafoveally presented either in the left or the right visual field. Attentional allocation was modulated by the presentation of a spatial cue before the target string. Accuracy in reporting the target string was modulated by the spatial cue, but this effect varied with the type of string. For unfamiliar strings, processing was facilitated when attention was focused on the string location and hindered when it was diverted from the target. This finding is consistent with the assumptions of the CDP+ model of reading aloud, as well as with familiarity-sensitivity models that argue for a flexible use of attention according to the specific requirements of the string. Moreover, we found that processing of high-frequency words was facilitated by an extra-large focus of attention. The latter result is consistent with the hypothesis that a broad distribution of attention is the default mode during reading of familiar words because it might optimally engage the broad receptive fields of the highest detectors in the hierarchical system for visual word recognition.

  4. The picture superiority effect in a cross-modality recognition task.

    Science.gov (United States)

    Stenberg, G; Radeborg, K; Hedman, L R

    1995-07-01

    Words and pictures were studied and recognition tests given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Experiment 3 added a manipulation of instructions to name studied objects, and Experiment 4 deviated from the others by presenting both picture and word referring to the same object together for study. The results showed that congruence between study and test modalities consistently facilitated recognition. Furthermore, items studied as pictures were more rapidly recognized than were items studied as words. With repeated testing, the second instance was affected by its predecessor, but the facilitating effect of picture-to-word priming exceeded that of word-to-picture priming. The findings suggest a two-stage recognition process, in which the first is based on perceptual familiarity and the second uses semantic links for a retrieval search. Common-code theories that grant privileged access to the semantic code for pictures or, alternatively, dual-code theories that assume mnemonic superiority for the image code are supported by the findings. Explanations of the picture superiority effect as resulting from dual encoding of pictures are not supported by the data.

  5. The influence of talker and foreign-accent variability on spoken word identification.

    Science.gov (United States)

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  6. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Correlation between maximum phonetically balanced word recognition score and pure-tone auditory threshold in elder presbycusis patients over 80 years old.

    Science.gov (United States)

    Deng, Xin-Sheng; Ji, Fei; Yang, Shi-Ming

    2014-02-01

    The maximum phonetically balanced word recognition score (PBmax) showed poor correlation with pure-tone thresholds in presbycusis patients older than 80 years. To study the characteristics of monosyllable recognition in presbycusis patients older than 80 years of age. Thirty presbycusis patients older than 80 years were included as the test group (group 80+). Another 30 patients aged 60-80 years were selected as the control group (group 80-). PBmax was tested by Mandarin monosyllable recognition test materials with the signal level at 30 dB above the averaged thresholds of 0.5, 1, 2, and 4 kHz (4FA) or the maximum comfortable level. The PBmax values of the test group and control group were compared with each other, and the correlation between PBmax and the predicted maximum speech recognition scores based on 4FA (PBmax-predict) was statistically analyzed. Under the optimal test conditions, the averaged PBmax was (77.3 ± 16.7) % for group 80- and (52.0 ± 25.4) % for group 80+ (p < 0.001). The PBmax of group 80- was significantly correlated with PBmax-predict (Spearman correlation = 0.715, p < 0.001). The score for group 80+ was less strongly correlated with PBmax-predict (Spearman correlation = 0.572, p = 0.001).

  8. Infants' long-term memory for the sound patterns of words and voices.

    Science.gov (United States)

    Houston, Derek M; Jusczyk, Peter W

    2003-12-01

    Infants' long-term memory for the phonological patterns of words versus the indexical properties of talkers' voices was examined in 3 experiments using the Headturn Preference Procedure (D. G. Kemler Nelson et al., 1995). Infants were familiarized with repetitions of 2 words and tested on the next day for their orientation times to 4 passages--2 of which included the familiarized words. At 7.5 months of age, infants oriented longer to passages containing familiarized words when these were produced by the original talker. At 7.5 and 10.5 months of age, infants did not recognize words in passages produced by a novel female talker. In contrast, 7.5-month-olds demonstrated word recognition in both talker conditions when presented with passages produced by both the original and the novel talker. The findings suggest that talker-specific information can prime infants' memory for words and facilitate word recognition across talkers. ((c) 2003 APA, all rights reserved)

  9. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research

    Directory of Open Access Journals (Sweden)

    Laslo Dinges

    2016-03-01

    Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is important when training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.

  10. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.

    Science.gov (United States)

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif

    2016-03-11

    Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is important when training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
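
    The vocabulary-based error correction mentioned in both versions of this record can be sketched as a nearest-neighbour search over a frequent-word list by edit distance; the cutoff and the tiny vocabulary below are assumptions for illustration, not the authors' implementation.

```python
# Sketch: replace an out-of-vocabulary recognition result by the closest
# vocabulary word, if one is close enough.
def edit_distance(a, b):
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary, max_distance=2):
    """Return the closest vocabulary word, or the word itself if none is close."""
    if word in vocabulary:
        return word
    best = min(vocabulary, key=lambda v: edit_distance(word, v))
    return best if edit_distance(word, best) <= max_distance else word

vocab = {"recognition", "handwriting", "arabic", "word", "database"}
print(correct("recogniton", vocab))   # -> 'recognition'
```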

  11. Optimal Viewing Position for Fully Connected and Unconnected words in Arabic

    Directory of Open Access Journals (Sweden)

    Ganayim Deia

    2016-06-01

    In order to assess the unique reading processes in Arabic, given its unique orthographic nature of natural inherent variations of inter-letter spacing, the current study examined the extent and influence of connectedness disparity during single word recognition using the optimal viewing position (OVP) paradigm (three-, four- and five-letter stimuli presented at a normal reading size, at all possible locations). The initial word viewing position was systematically manipulated by shifting words horizontally relative to an imposed initial viewing position. Variations in recognition and processing time were measured as a function of initial viewing position. Fully connected/unconnected Arabic words were used. It was found that OVP effects occurred during the processing of isolated Arabic words. In Arabic, the OVP may be in the center of the word. No OVP was found in three-letter words; for four- and five-letter words, the OVP effect appeared as a U-shaped curve with a minimum towards the second and third letters. Thus, the OVP effects generalize across structurally different alphabetic scripts.

  12. Emotion Recognition of Weblog Sentences Based on an Ensemble Algorithm of Multi-label Classification and Word Emotions

    Science.gov (United States)

    Li, Ji; Ren, Fuji

    Weblogs have greatly changed the way people communicate. Affective analysis of blog posts is found valuable for many applications such as text-to-speech synthesis or computer-assisted recommendation. Traditional emotion recognition in text based on single-label classification cannot satisfy higher requirements of affective computing. In this paper, the automatic identification of sentence emotion in weblogs is modeled as a multi-label text categorization task. Experiments are carried out on 12273 blog sentences from the Chinese emotion corpus Ren_CECps with 8-dimension emotion annotation. An ensemble algorithm, RAKEL, is used to recognize dominant emotions from the writer's perspective. Our emotion feature using detailed intensity representation for word emotions outperforms the other main features such as the word frequency feature and the traditional lexicon-based feature. In order to deal with relatively complex sentences, we integrate grammatical characteristics of punctuation, disjunctive connectives, modification relations and negation into features. It achieves 13.51% and 12.49% increases for Micro-averaged F1 and Macro-averaged F1 respectively compared to the traditional lexicon-based feature. Results show that multiple-dimension emotion representation with grammatical features can efficiently classify sentence emotion in a multi-label problem.
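
    The RAKEL idea (random k-labelsets) used above can be sketched from scratch: each ensemble member trains a label-powerset classifier on a random subset of k emotion labels, and per-label votes are averaged and thresholded. The feature extraction from word-emotion intensities and grammatical cues is replaced by a generic feature matrix, and all parameters below are assumptions rather than the paper's configuration.

```python
# Sketch of RAKEL: an ensemble of label-powerset classifiers over random k-labelsets.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_rakel(X, Y, k=3, n_subsets=12, seed=0):
    """Train one label-powerset classifier per random k-labelset."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_subsets):
        subset = rng.choice(Y.shape[1], size=k, replace=False)
        # Encode each example's label combination on this subset as one class.
        combos = [tuple(row) for row in Y[:, subset]]
        class_of = {c: i for i, c in enumerate(sorted(set(combos)))}
        y_powerset = np.array([class_of[c] for c in combos])
        clf = LogisticRegression(max_iter=1000).fit(X, y_powerset)
        combo_of = {i: np.array(c) for c, i in class_of.items()}
        ensemble.append((subset, clf, combo_of))
    return ensemble

def predict_rakel(ensemble, X, n_labels, threshold=0.5):
    """Average each label's votes across the classifiers that cover it."""
    votes = np.zeros((X.shape[0], n_labels))
    counts = np.zeros(n_labels)
    for subset, clf, combo_of in ensemble:
        counts[subset] += 1
        for row, pred_class in enumerate(clf.predict(X)):
            votes[row, subset] += combo_of[pred_class]
    return (votes / np.maximum(counts, 1)) >= threshold

# Toy usage: 8 emotion labels (as in Ren_CECps), random stand-in features.
X = np.random.rand(200, 20)
Y = (np.random.rand(200, 8) > 0.7).astype(int)
model = train_rakel(X, Y)
print(predict_rakel(model, X[:3], n_labels=8).astype(int))
```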

  13. Individual differences in language and working memory affect children's speech recognition in noise.

    Science.gov (United States)

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

    We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Ninety-six children with normal hearing, who were between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.

  14. Memory for Pictures, Words, and Spatial Location in Older Adults: Evidence for Pictorial Superiority.

    Science.gov (United States)

    Park, Denise Cortis; And Others

    1983-01-01

    Tested recognition memory for items and spatial location by varying picture and word stimuli across four slide quadrants. Results showed a pictorial superiority effect for item recognition and a greater ability to remember the spatial location of pictures versus words for both old and young adults (N=95). (WAS)

  15. The picture superiority effect in a cross-modality recognition task

    OpenAIRE

    Stenberg, Georg; Radeborg, Karl; Hedman, Leif R.

    1995-01-01

    Words and pictures were studied, and recognition tests were given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Exp...

  16. The picture superiority effect in associative recognition.

    Science.gov (United States)

    Hockley, William E

    2008-10-01

    The picture superiority effect has been well documented in tests of item recognition and recall. The present study shows that the picture superiority effect extends to associative recognition. In three experiments, students studied lists consisting of random pairs of concrete words and pairs of line drawings; then they discriminated between intact (old) and rearranged (new) pairs of words and pictures at test. The discrimination advantage for pictures over words was seen in a greater hit rate for intact picture pairs, but there was no difference in the false alarm rates for the two types of stimuli. That is, there was no mirror effect. The same pattern of results was found when the test pairs consisted of the verbal labels of the pictures shown at study (Experiment 4), indicating that the hit rate advantage for picture pairs represents an encoding benefit. The results have implications for theories of the picture superiority effect and models of associative recognition.

  17. The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition

    Science.gov (United States)

    Chan, Kit Ying; Vitevitch, Michael S.

    2009-01-01

    Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…

  18. Exploring Individual Differences in Irregular Word Recognition among Children with Early-Emerging and Late-Emerging Word Reading Difficulty

    Science.gov (United States)

    Steacy, Laura M.; Kearns, Devin M.; Gilbert, Jennifer K.; Compton, Donald L.; Cho, Eunsoo; Lindstrom, Esther R.; Collins, Alyson A.

    2017-01-01

    Models of irregular word reading that take into account both child- and word-level predictors have not been evaluated in typically developing children and children with reading difficulty (RD). The purpose of the present study was to model individual differences in irregular word reading ability among 5th grade children (N = 170), oversampled for…

  19. Massive cortical reorganization in sighted Braille readers.

    Science.gov (United States)

    Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin

    2016-03-15

    The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As its result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA) that was modulated by their Braille reading speed and strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.

  20. Chinese Learners of English See Chinese Words When Reading English Words.

    Science.gov (United States)

    Ma, Fengyang; Ai, Haiyang

    2018-06-01

    The present study examines whether, when second language (L2) learners read words in the L2, the orthography and/or phonology of the translation words in the first language (L1) is activated, and whether the patterns are modulated by proficiency in the L2. In two experiments, two groups of Chinese learners of English immersed in the L1 environment, one less proficient and the other more proficient in English, performed a translation recognition task. In this task, participants judged whether pairs of words, with an L2 word preceding an L1 word, were translation words or not. The critical conditions compared the performance of learners in rejecting distractors that were related to the translation word (e.g., 杯, pronounced as /bei 1/) of an L2 word (e.g., cup) in orthography (e.g., 坏, bad in Chinese, pronounced as /huai 4/) or phonology (e.g., 悲, sad in Chinese, pronounced as /bei 1/). Results of Experiment 1 showed that less proficient learners were slower and less accurate in rejecting translation orthography distractors, as compared to unrelated controls, demonstrating a robust translation orthography interference effect. In contrast, their performance was not significantly different when rejecting translation phonology distractors, relative to unrelated controls, showing no translation phonology interference. The same patterns were observed in more proficient learners in Experiment 2. Together, these results suggest that when Chinese learners of English read English words, the orthographic information, but not the phonological information, of the Chinese translation words is activated. In addition, this activation is not modulated by L2 proficiency.

  1. Sub-word image clustering in Farsi printed books

    Science.gov (United States)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-02-01

    Most OCR systems are designed for the recognition of a single page. In case of unfamiliar font faces, low quality papers and degraded prints, the performance of these products drops sharply. However, an OCR system can use redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for the applications dealing with large printed documents. We assume that the whole document is printed by a unique unknown font with low quality print. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Due to the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images, based on Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks. Then all centers of created clusters are labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
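
    A rough sketch of the matching and incremental clustering described above: two binarized sub-word images are compared by a normalized Hamming distance on a common grid plus the gap between their area-to-perimeter ratios, and each image joins the nearest existing cluster or starts a new one. The resizing step, the weighting of the two terms and the threshold are assumptions, not the paper's exact algorithm.

```python
# Sketch: Hamming + area/perimeter distance and incremental clustering of
# binarized sub-word images. Grid size, weights and threshold are assumed.
import numpy as np

def resize_binary(img, shape=(32, 64)):
    """Nearest-neighbour resize of a 0/1 image onto a common grid."""
    rows = np.linspace(0, img.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, shape[1]).astype(int)
    return img.astype(bool)[np.ix_(rows, cols)]

def area_perimeter_ratio(img):
    """Foreground area divided by an approximate perimeter (boundary pixel count)."""
    padded = np.pad(img, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (img & ~interior).sum()
    return img.sum() / max(perimeter, 1)

def distance(a, b):
    """Normalized Hamming distance plus the area/perimeter gap (weight is assumed)."""
    a_r, b_r = resize_binary(a), resize_binary(b)
    hamming = np.mean(a_r != b_r)
    return hamming + 0.5 * abs(area_perimeter_ratio(a_r) - area_perimeter_ratio(b_r))

def cluster_incremental(images, threshold=0.15):
    """Assign each sub-word image to the nearest cluster center or open a new one."""
    centers, labels = [], []
    for img in images:
        dists = [distance(img, c) for c in centers]
        if dists and min(dists) < threshold:
            labels.append(int(np.argmin(dists)))
        else:
            centers.append(img)
            labels.append(len(centers) - 1)
    return centers, labels

# Toy usage with random binary "images"; real input would be binarized sub-word crops.
imgs = [(np.random.rand(40, 80) > 0.6) for _ in range(10)]
print(cluster_incremental(imgs)[1])
```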

  2. Output Interference in Recognition Memory

    Science.gov (United States)

    Criss, Amy H.; Malmberg, Kenneth J.; Shiffrin, Richard M.

    2011-01-01

    Dennis and Humphreys (2001) proposed that interference in recognition memory arises solely from the prior contexts of the test word: Interference does not arise from memory traces of other words (from events prior to the study list or on the study list, and regardless of similarity to the test item). We evaluate this model using output…

  3. Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning.

    Science.gov (United States)

    Adlof, Suzanne M; Patten, Hannah

    2017-03-01

    This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information. Fifty children, with a mean age of 8 years (range 5-12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age. Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall. These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed.
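
    The hierarchical regression logic reported above amounts to comparing nested models and reading off the change in explained variance; the sketch below does this with statsmodels on synthetic data, where all column names and effect sizes are assumptions.

```python
# Sketch: nested-model comparison (age only vs. age + predictors) on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50
df = pd.DataFrame({
    "age": rng.uniform(5, 12, n),
    "nonword_rep": rng.normal(size=n),
    "vocab": rng.normal(size=n),
})
df["word_learning"] = (0.3 * df["age"] + 0.8 * df["nonword_rep"]
                       + 0.6 * df["vocab"] + rng.normal(0, 1, n))

step1 = smf.ols("word_learning ~ age", data=df).fit()
step2 = smf.ols("word_learning ~ age + nonword_rep + vocab", data=df).fit()
print(f"R2, age only: {step1.rsquared:.3f}")
print(f"R2, age + nonword repetition + vocabulary: {step2.rsquared:.3f}")
print(f"Delta R2 beyond age: {step2.rsquared - step1.rsquared:.3f}")
```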

  4. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
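
    As a toy, text-level illustration of the shape-coding idea (the actual system derives these features from word images without any OCR), each character can be mapped to a coarse class such as ascender, descender or x-height plus a hole flag, and the concatenated code used to index words; the character sets and code format below are assumptions.

```python
# Toy stand-in for word shape coding: map each letter to a coarse shape symbol
# and index words by the resulting code string.
ASCENDERS  = set("bdfhklt")
DESCENDERS = set("gjpqy")
HOLES      = set("abdegopq")          # letters containing an enclosed loop

def shape_code(word):
    """Annotate a word with one coarse shape symbol per character."""
    code = []
    for ch in word.lower():
        cls = "A" if ch in ASCENDERS else "D" if ch in DESCENDERS else "x"
        code.append(cls + ("o" if ch in HOLES else "-"))
    return "".join(code)

def build_index(words):
    """Group words sharing the same shape code, for code-based retrieval."""
    index = {}
    for w in words:
        index.setdefault(shape_code(w), []).append(w)
    return index

index = build_index(["sight", "word", "recognition", "reading"])
print(shape_code("sight"))               # x-x-DoA-A-
print(index[shape_code("recognition")])  # ['recognition']
```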

  5. Spatial attention in written word perception.

    Science.gov (United States)

    Montani, Veronica; Facoetti, Andrea; Zorzi, Marco

    2014-01-01

    The role of attention in visual word recognition and reading aloud is a long debated issue. Studies of both developmental and acquired reading disorders provide growing evidence that spatial attention is critically involved in word reading, in particular for the phonological decoding of unfamiliar letter strings. However, studies on healthy participants have produced contrasting results. The aim of this study was to investigate how the allocation of spatial attention may influence the perception of letter strings in skilled readers. High frequency words (HFWs), low frequency words and pseudowords were briefly and parafoveally presented either in the left or the right visual field. Attentional allocation was modulated by the presentation of a spatial cue before the target string. Accuracy in reporting the target string was modulated by the spatial cue, but this effect varied with the type of string. For unfamiliar strings, processing was facilitated when attention was focused on the string location and hindered when it was diverted from the target. This finding is consistent with the assumptions of the CDP+ model of reading aloud, as well as with familiarity-sensitivity models that argue for a flexible use of attention according to the specific requirements of the string. Moreover, we found that processing of HFWs was facilitated by an extra-large focus of attention. The latter result is consistent with the hypothesis that a broad distribution of attention is the default mode during reading of familiar words because it might optimally engage the broad receptive fields of the highest detectors in the hierarchical system for visual word recognition.

  6. Different Neural Correlates of Emotion-Label Words and Emotion-Laden Words: An ERP Study

    Directory of Open Access Journals (Sweden)

    Juan Zhang

    2017-09-01

    It is well-documented that both emotion-label words (e.g., sadness, happiness) and emotion-laden words (e.g., death, wedding) can induce emotion activation. However, the neural correlates of emotion-label words and emotion-laden words recognition have not been examined. The present study aimed to compare the underlying neural responses when processing the two kinds of words by employing event-related potential (ERP) measurements. Fifteen Chinese native speakers were asked to perform a lexical decision task in which they should judge whether a two-character compound stimulus was a real word or not. Results showed that (1) emotion-label words and emotion-laden words elicited similar P100 at the posterior sites, (2) larger N170 was found for emotion-label words than for emotion-laden words at the occipital sites on the right hemisphere, and (3) negative emotion-label words elicited larger Late Positivity Complex (LPC) on the right hemisphere than on the left hemisphere while such effect was not found for emotion-laden words and positive emotion-label words. The results indicate that emotion-label words and emotion-laden words elicit different cortical responses at both early (N170) and late (LPC) stages. In addition, right hemisphere advantage for emotion-label words over emotion-laden words can be observed in certain time windows (i.e., N170 and LPC) while fails to be detected in some other time window (i.e., P100). The implications of the current findings for future emotion research were discussed.

  7. Different Neural Correlates of Emotion-Label Words and Emotion-Laden Words: An ERP Study.

    Science.gov (United States)

    Zhang, Juan; Wu, Chenggang; Meng, Yaxuan; Yuan, Zhen

    2017-01-01

    It is well-documented that both emotion-label words (e.g., sadness, happiness) and emotion-laden words (e.g., death, wedding) can induce emotion activation. However, the neural correlates of emotion-label words and emotion-laden words recognition have not been examined. The present study aimed to compare the underlying neural responses when processing the two kinds of words by employing event-related potential (ERP) measurements. Fifteen Chinese native speakers were asked to perform a lexical decision task in which they should judge whether a two-character compound stimulus was a real word or not. Results showed that (1) emotion-label words and emotion-laden words elicited similar P100 at the posteriors sites, (2) larger N170 was found for emotion-label words than for emotion-laden words at the occipital sites on the right hemisphere, and (3) negative emotion-label words elicited larger Late Positivity Complex (LPC) on the right hemisphere than on the left hemisphere while such effect was not found for emotion-laden words and positive emotion-label words. The results indicate that emotion-label words and emotion-laden words elicit different cortical responses at both early (N170) and late (LPC) stages. In addition, right hemisphere advantage for emotion-label words over emotion-laden words can be observed in certain time windows (i.e., N170 and LPC) while fails to be detected in some other time window (i.e., P100). The implications of the current findings for future emotion research were discussed.

  8. Recent advances in Automatic Speech Recognition for Vietnamese

    OpenAIRE

    Le , Viet-Bac; Besacier , Laurent; Seng , Sopheap; Bigi , Brigitte; Do , Thi-Ngoc-Diep

    2008-01-01

    This paper presents our recent activities for automatic speech recognition for Vietnamese. First, our text data collection and processing methods and tools are described. For language modeling, we investigate word, sub-word and also hybrid word/sub-word models. For acoustic modeling, when only limited speech data are available for Vietnamese, we propose some crosslingual acoustic modeling techniques. Furthermore, since the use of sub-word units can reduce the high out-...

  9. The ties that bind what is known to the recognition of what is new.

    Science.gov (United States)

    Nelson, D L; Zhang, N; McKinney, V M

    2001-09-01

    Recognition success varies with how information is encoded (e.g., level of processing) and with what is already known as a result of past learning (e.g., word frequency). This article presents the results of experiments showing that preexisting connections involving the associates of studied words facilitate their recognition regardless of whether the words are intentionally encoded or are incidentally encoded under semantic or nonsemantic conditions. Words are more likely to be recognized when they have either more resonant connections coming back to them from their associates or more connections among their associates. Such results occur even though attention is never drawn to these associates. Regression analyses showed that these connections affect recognition independently of frequency, so the present results add to the literature showing that prior lexical knowledge contributes to episodic recognition. In addition, equations that use free-association data to derive composite strength indices of resonance and connectivity were evaluated. Implications for theories of recognition are discussed.

  10. Deep learning with word embeddings improves biomedical named entity recognition.

    Science.gov (United States)

    Habibi, Maryam; Weber, Leon; Neves, Mariana; Wiegandt, David Luis; Leser, Ulf

    2017-07-15

    Text mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult. We show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall. The source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ . habibima@informatik.hu-berlin.de. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
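
    The embedding-plus-BiLSTM backbone of the architecture discussed above can be sketched in a few lines of PyTorch; the CRF output layer of the actual LSTM-CRF is replaced here by a per-token softmax, so this is a simplified stand-in rather than the published model, and all dimensions and the tag set are illustrative.

```python
# Simplified stand-in for the LSTM-CRF tagger: embeddings -> BiLSTM -> per-token
# tag scores, trained with a per-token cross-entropy loss (the CRF layer is omitted).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Embedding -> bidirectional LSTM -> per-token tag scores (emissions)."""
    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        # In the full model these embeddings would be initialized from pretrained
        # word vectors and the emissions fed to a CRF layer.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_tags)

    def forward(self, token_ids):
        emb = self.embed(token_ids)        # (batch, seq_len, emb_dim)
        hidden, _ = self.lstm(emb)         # (batch, seq_len, 2 * hidden_dim)
        return self.out(hidden)            # (batch, seq_len, n_tags)

# Toy usage: two 5-token sentences, a 5-tag BIO-style tag set, one training step.
model = BiLSTMTagger(vocab_size=1000, n_tags=5)
tokens = torch.randint(0, 1000, (2, 5))
tags = torch.randint(0, 5, (2, 5))
emissions = model(tokens)
loss = nn.CrossEntropyLoss()(emissions.reshape(-1, 5), tags.reshape(-1))
loss.backward()
print(emissions.shape)                     # torch.Size([2, 5, 5])
```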

  11. Individual differences in language and working memory affect children’s speech recognition in noise

    Science.gov (United States)

    McCreery, Ryan W.; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-01-01

    Objective We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. Design As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Study sample Ninety-six children with normal hearing, who were between 5 and 12 years of age. Results Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Conclusions Working memory and language both influence children’s speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child’s auditory skills, consistent with the Ease of Language Understanding model. PMID:27981855

  12. Can false memory for critical lures occur without conscious awareness of list words?

    Science.gov (United States)

    Sadler, Daniel D; Sodmont, Sharon M; Keefer, Lucas A

    2018-02-01

    We examined whether the DRM false memory effect can occur when list words are presented below the perceptual identification threshold. In four experiments, subjects showed robust veridical memory for studied words and false memory for critical lures when masked list words were presented at exposure durations of 43 ms per word. Shortening the exposure duration to 29 ms virtually eliminated veridical recognition of studied words and completely eliminated false recognition of critical lures. Subjective visibility ratings in Experiments 3a and 3b support the assumption that words presented at 29 ms were subliminal for most participants, but were occasionally experienced with partial awareness by participants with higher perceptual awareness. Our results indicate that a false memory effect does not occur in the absence of conscious awareness of list words, but it does occur when word stimuli are presented at an intermediate level of visibility. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. FN400 and LPC memory effects for concrete and abstract words.

    Science.gov (United States)

    Stróżak, Paweł; Bird, Christopher W; Corby, Krystin; Frishkoff, Gwen; Curran, Tim

    2016-11-01

    According to dual-process models, recognition memory depends on two neurocognitive mechanisms: familiarity, which has been linked to the frontal N400 (FN400) effect in studies using ERPs, and recollection, which is reflected by changes in the late positive complex (LPC). Recently, there has been some debate over the relationship between FN400 familiarity effects and N400 semantic effects. According to one view, these effects are one and the same. Proponents of this view have suggested that the frontal distribution of the FN400 could be due to stimulus concreteness: recognition memory experiments commonly use highly imageable or concrete words (or pictures), which elicit semantic ERPs with a frontal distribution. In the present study, we tested this claim using a recognition memory paradigm in which subjects memorized concrete and abstract nouns; half of the words changed font color between study and test. FN400 and LPC old/new effects were observed for abstract as well as concrete words, and were stronger over right hemisphere electrodes for concrete words. However, there was no difference in anteriority of the FN400 effect for the two word types. These findings challenge the notion that the frontal distribution of the FN400 old/new effect is fully explained by stimulus concreteness. © 2016 Society for Psychophysiological Research.

  15. Drifting through Basic Subprocesses of Reading: A Hierarchical Diffusion Model Analysis of Age Effects on Visual Word Recognition.

    Science.gov (United States)

    Froehlich, Eva; Liebig, Johanna; Ziegler, Johannes C; Braun, Mario; Lindenberger, Ulman; Heekeren, Hauke R; Jacobs, Arthur M

    2016-01-01

    Reading is one of the most popular leisure activities and it is routinely performed by most individuals even in old age. Successful reading enables older people to master and actively participate in everyday life and maintain functional independence. Yet, reading comprises a multitude of subprocesses and it is undoubtedly one of the most complex accomplishments of the human brain. Not surprisingly, findings of age-related effects on word recognition and reading have been partly contradictory and are often confined to only one of four central reading subprocesses, i.e., sublexical, orthographic, phonological and lexico-semantic processing. The aim of the present study was therefore to systematically investigate the impact of age on each of these subprocesses. A total of 1,807 participants (young, N = 384; old, N = 1,423) performed four decision tasks specifically designed to tap one of the subprocesses. To account for the behavioral heterogeneity in older adults, this subsample was split into high and low performing readers. Data were analyzed using a hierarchical diffusion modeling approach, which provides more information than standard response time/accuracy analyses. Taking into account incorrect and correct response times, their distributions and accuracy data, hierarchical diffusion modeling allowed us to differentiate between age-related changes in decision threshold, non-decision time and the speed of information uptake. We observed longer non-decision times for older adults and a more conservative decision threshold. More importantly, high-performing older readers outperformed younger adults at the speed of information uptake in orthographic and lexico-semantic processing, whereas a general age-disadvantage was observed at the sublexical and phonological levels. Low-performing older readers were slowest in information uptake in all four subprocesses. Discussing these results in terms of computational models of word recognition, we propose age
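
    For readers unfamiliar with the diffusion-model parameters named here (drift rate as speed of information uptake, boundary separation as decision threshold, and non-decision time), a minimal simulation sketch follows; the parameter values are illustrative and are not taken from the study.

        import numpy as np

        def simulate_diffusion(drift, boundary, ndt, n_trials=2000, dt=0.001, noise=1.0, seed=0):
            """Two-choice diffusion process: evidence starts at boundary/2 and drifts until it
            hits 0 (error) or boundary (correct); non-decision time is added to each RT."""
            rng = np.random.default_rng(seed)
            rts, correct = [], []
            for _ in range(n_trials):
                x, t = boundary / 2.0, 0.0
                while 0.0 < x < boundary:
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts.append(t + ndt)
                correct.append(x >= boundary)
            return np.array(rts), np.array(correct)

        # A lower drift rate (slower information uptake) lengthens RTs and lowers accuracy,
        # while a longer non-decision time shifts the whole RT distribution.
        rts, acc = simulate_diffusion(drift=1.5, boundary=1.0, ndt=0.35)
        print(round(rts.mean(), 3), round(acc.mean(), 3))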

  16. Effect of word familiarity on visually evoked magnetic fields.

    Science.gov (United States)

    Harada, N; Iwaki, S; Nakagawa, S; Yamaguchi, M; Tonoike, M

    2004-11-30

    This study investigated the effect of the word familiarity of visual stimuli on the word-recognition function of the human brain. Word familiarity is an index of the relative ease of word perception and is characterized by facilitation and accuracy in word recognition. We studied the effect of word familiarity on the elicitation of visually evoked magnetic fields in a word-naming task, using "Hiragana" characters (phonetic characters in Japanese orthography) as visual stimuli. The words were selected from a database of lexical properties of Japanese. The four-character "Hiragana" words used were grouped and presented in four classes of familiarity. Three components were observed in the averaged waveforms of the root mean square (RMS) value, at latencies of about 100 ms, 150 ms and 220 ms. The RMS value of the 220 ms component showed a significant positive correlation with familiarity (F(3,36) = 5.501, p = 0.035). Equivalent current dipoles (ECDs) of the 220 ms component were located in the intraparietal sulcus (IPS). Increments in the RMS value of the 220 ms component, which might reflect ideographic word recognition in which the word is retrieved "as a whole", were enhanced with increasing familiarity. The interaction of characters, which increased with familiarity, might function "as a large symbol" and enhance a "pop-out" effect, with an escaping character inhibiting the other characters and enhancing the segmentation of the character (as a figure) from the ground.

  17. The development of word recognition, sentence comprehension, word spelling, and vocabulary in children with deafness: a longitudinal study.

    Science.gov (United States)

    Colin, S; Leybaert, J; Ecalle, J; Magnan, A

    2013-05-01

    Only a small number of longitudinal studies have been conducted to assess the literacy skills of children with hearing impairment. The results of these studies are inconsistent with regard to the importance of phonology in reading acquisition as is the case in studies with hearing children. Colin, Magnan, Ecalle, and Leybaert (2007) revealed the important role of early phonological skills and the contribution of the factor of age of exposure to Cued Speech (CS: a manual system intended to resolve the ambiguities inherent to speechreading) to subsequent reading acquisition (from kindergarten to first grade) in children with deafness. The aim of the present paper is twofold: (1) to confirm the role of early exposure to CS in the development of the linguistic skills necessary in order to learn reading and writing in second grade; (2) to reveal the possible existence of common factors other than CS that may influence literacy performances and explain the inter-individual difference within groups of children with hearing impairment. Eighteen 6-year-old hearing-impaired and 18 hearing children of the same chronological age were tested from kindergarten to second grade. The children with deafness had either been exposed to CS at an early age, at home and before kindergarten (early-CS group), or had first been exposed to it when they entered kindergarten (late-CS group) or first grade (beginner-CS group). Children were given implicit and explicit phonological tasks, silent reading tasks (word recognition and sentence comprehension), word spelling, and vocabulary tasks. Children in the early-CS group outperformed those of the late-CS and beginner-CS groups in phonological tasks from first grade to second grade. They became better readers and better spellers than those from the late-CS group and the beginner-CS group. Their performances did not differ from those of hearing children in any of the tasks except for the receptive vocabulary test. Thus early exposure to CS seems

  18. Directed forgetting: Comparing pictures and words.

    Science.gov (United States)

    Quinlan, Chelsea K; Taylor, Tracy L; Fawcett, Jonathan M

    2010-03-01

    The authors investigated directed forgetting as a function of the stimulus type (picture, word) presented at study and test. In an item-method directed forgetting task, study items were presented 1 at a time, each followed with equal probability by an instruction to remember or forget. Participants exhibited greater yes-no recognition of remember than forget items for each of the 4 study-test conditions (picture-picture, picture-word, word-word, word-picture). However, this difference was significantly smaller when pictures were studied than when words were studied. This finding demonstrates that the magnitude of the directed forgetting effect can be reduced by high item memorability, such as when the picture superiority effect is operating. This suggests caution in using pictures at study when the goal of an experiment is to examine potential group differences in the magnitude of the directed forgetting effect. 2010 APA, all rights reserved.

  19. Translation Ambiguity but Not Word Class Predicts Translation Performance

    Science.gov (United States)

    Prior, Anat; Kroll, Judith F.; Macwhinney, Brian

    2013-01-01

    We investigated the influence of word class and translation ambiguity on cross-linguistic representation and processing. Bilingual speakers of English and Spanish performed translation production and translation recognition tasks on nouns and verbs in both languages. Words either had a single translation or more than one translation. Translation…

  20. Sea turtles sightings in North Carolina

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea turtles sightings are reported to the NMFS Beaufort Laboratory sea turtle program by the general public as they are fishing, boating, etc. These sightings...

  1. Cognitive factors affecting free recall, cued recall, and recognition tasks in Alzheimer's disease.

    Science.gov (United States)

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). We recruited 349 consecutive AD patients who attended a memory clinic. Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living.

  2. Progressive-Search Algorithms for Large-Vocabulary Speech Recognition

    National Research Council Canada - National Science Library

    Murveit, Hy; Butzberger, John; Digalakis, Vassilios; Weintraub, Mitch

    1993-01-01

    .... An algorithm, the "Forward-Backward Word-Life Algorithm," is described. It can generate a word lattice in a progressive search that would be used as a language model embedded in a succeeding recognition pass to reduce computation requirements...

  3. Recall and recognition hypermnesia for Socratic stimuli.

    Science.gov (United States)

    Kazén, Miguel; Solís-Macías, Víctor M

    2016-01-01

    In two experiments, we investigate hypermnesia, net memory improvements with repeated testing of the same material after a single study trial. In the first experiment, we found hypermnesia across three trials for the recall of word solutions to Socratic stimuli (dictionary-like definitions of concepts) replicating Erdelyi, Buschke, and Finkelstein and, for the first time using these materials, for their recognition. In the second experiment, we had two "yes/no" recognition groups, a Socratic stimuli group presented with concrete and abstract verbal materials and a word-only control group. Using signal detection measures, we found hypermnesia for concrete Socratic stimuli-and stable performance for abstract stimuli across three recognition tests. The control group showed memory decrements across tests. We interpret these findings with the alternative retrieval pathways (ARP) hypothesis, contrasting it with alternative theories of hypermnesia, such as depth of processing, generation and retrieve-recognise. We conclude that recognition hypermnesia for concrete Socratic stimuli is a reliable phenomenon, which we found in two experiments involving both forced-choice and yes/no recognition procedures.

  4. No effect of stress on false recognition.

    Science.gov (United States)

    Beato, María Soledad; Cadavid, Sara; Pulido, Ramón F; Pinho, María Salomé

    2013-02-01

    The present study aimed to analyze the effect of acute stress on false recognition in the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, lists of words associated with a non-presented critical lure are studied and, in a subsequent memory test, critical lures are often falsely remembered. In two experiments, participants were randomly assigned to either the stress group (Trier Social Stress Test) or the no-stress control group. Because we sought to control the level-of-processing at encoding, in Experiment 1, participants created a visual mental image for each presented word (deep encoding). In Experiment 2, participants performed a shallow encoding (to respond whether each word contained the letter "o"). The results indicated that, in both experiments, as predicted, heart rate and STAI-S scores increased only in the stress group. However, false recognition did not differ across stress and no-stress groups. Results suggest that, although psychosocial stress was successfully induced, it does not enhance the vulnerability of individuals with acute stress to DRM false recognition, regardless of the level of processing.

  5. Effects of modality and repetition in a continuous recognition memory task: Repetition has no effect on auditory recognition memory.

    Science.gov (United States)

    Amir Kassim, Azlina; Rehman, Rehan; Price, Jessica M

    2018-04-01

    Previous research has shown that auditory recognition memory is poorer compared to visual and cross-modal (visual and auditory) recognition memory. The effect of repetition on memory has been robust in showing improved performance. It is not clear, however, how auditory recognition memory compares to visual and cross-modal recognition memory following repetition. Participants performed a recognition memory task, making old/new discriminations to new stimuli, stimuli repeated for the first time after 4-7 intervening items (R1), or repeated for the second time after 36-39 intervening items (R2). Depending on the condition, participants were either exposed to visual stimuli (2D line drawings), auditory stimuli (spoken words), or cross-modal stimuli (pairs of images and associated spoken words). Results showed that unlike participants in the visual and cross-modal conditions, participants in the auditory recognition did not show improvements in performance on R2 trials compared to R1 trials. These findings have implications for pedagogical techniques in education, as well as for interventions and exercises aimed at boosting memory performance. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.

    Science.gov (United States)

    Acha, Joana; Perea, Manuel

    2010-01-01

    Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been drawn - in part - by the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions - note that phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.

  7. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    Science.gov (United States)

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  8. Using complex networks to quantify consistency in the use of words

    International Nuclear Information System (INIS)

    Amancio, D R; Oliveira Jr, O N; Costa, L da F

    2012-01-01

    In this paper we have quantified the consistency of word usage in written texts represented by complex networks, where words were taken as nodes, by measuring the degree of preservation of the node neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law that applies to the frequency of use. Consistency correlated positively with the familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for eight authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied in other tasks, such as emotion recognition
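
    A toy sketch of the kind of measurement described above, assuming the networkx library; the Jaccard-overlap consistency index and the adjacency-based edges below are simplifications chosen for illustration, not necessarily the exact index or network construction used by the authors.

        import networkx as nx

        def word_network(text):
            """Words as nodes; an edge links each pair of adjacent words."""
            g = nx.Graph()
            tokens = text.lower().split()
            g.add_edges_from(zip(tokens, tokens[1:]))
            return g

        def consistency(word, g1, g2):
            """Degree to which a word keeps the same neighborhood across two texts
            (Jaccard overlap of its neighbor sets)."""
            n1 = set(g1.neighbors(word)) if word in g1 else set()
            n2 = set(g2.neighbors(word)) if word in g2 else set()
            union = n1 | n2
            return len(n1 & n2) / len(union) if union else 0.0

        g_a = word_network("the old man read the old book")
        g_b = word_network("the old man wrote the new book")
        print(consistency("old", g_a, g_b))   # 0.667 for this toy pair of texts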

  9. Optical character recognition of handwritten Arabic using hidden Markov models

    Science.gov (United States)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.
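
    A compact illustration of the Viterbi step described above, decoding the most probable character sequence from a sequence of observed feature symbols; the toy alphabet and the transition and emission probabilities are invented for illustration and do not come from the paper.

        import numpy as np

        def viterbi(obs, states, log_start, log_trans, log_emit):
            """Most probable hidden state sequence (e.g., the characters of a sub-word)
            given a sequence of observed feature symbols."""
            n, t = len(states), len(obs)
            score = np.full((t, n), -np.inf)
            back = np.zeros((t, n), dtype=int)
            score[0] = log_start + log_emit[:, obs[0]]
            for i in range(1, t):
                for s in range(n):
                    cand = score[i - 1] + log_trans[:, s]
                    back[i, s] = int(np.argmax(cand))
                    score[i, s] = cand[back[i, s]] + log_emit[s, obs[i]]
            path = [int(np.argmax(score[-1]))]
            for i in range(t - 1, 0, -1):
                path.append(back[i, path[-1]])
            return [states[s] for s in reversed(path)]

        # Toy model: two "characters" and three observable stroke-feature symbols.
        states = ["alif", "ba"]
        log_start = np.log([0.6, 0.4])
        log_trans = np.log([[0.7, 0.3],      # transitions, as if estimated from a text corpus
                            [0.4, 0.6]])
        log_emit = np.log([[0.5, 0.4, 0.1],  # emissions, as if derived from extracted features
                           [0.1, 0.3, 0.6]])
        print(viterbi([0, 1, 2], states, log_start, log_trans, log_emit))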

  10. Culture/Religion and Identity: Social Justice versus Recognition

    Science.gov (United States)

    Bekerman, Zvi

    2012-01-01

    Recognition is the main word attached to multicultural perspectives. The multicultural call for recognition, the one calling for the recognition of cultural minorities and identities, the one now voiced by liberal states all over and also in Israel, was a more difficult one. It took the author some time to realize that calling for the recognition…

  11. Encoding in the visual word form area: an fMRI adaptation study of words versus handwriting.

    Science.gov (United States)

    Barton, Jason J S; Fox, Christopher J; Sekunova, Alla; Iaria, Giuseppe

    2010-08-01

    Written texts are not just words but complex multidimensional stimuli, including aspects such as case, font, and handwriting style, for example. Neuropsychological reports suggest that left fusiform lesions can impair the reading of text for word (lexical) content, being associated with alexia, whereas right-sided lesions may impair handwriting recognition. We used fMRI adaptation in 13 healthy participants to determine if repetition-suppression occurred for words but not handwriting in the left visual word form area (VWFA) and the reverse in the right fusiform gyrus. Contrary to these expectations, we found adaptation for handwriting but not for words in both the left VWFA and the right VWFA homologue. A trend to adaptation for words but not handwriting was seen only in the left middle temporal gyrus. An analysis of anterior and posterior subdivisions of the left VWFA also failed to show any adaptation for words. We conclude that the right and the left fusiform gyri show similar patterns of adaptation for handwriting, consistent with a predominantly perceptual contribution to text processing.

  12. Acute effects of triazolam on false recognition.

    Science.gov (United States)

    Mintzer, M Z; Griffiths, R R

    2000-12-01

    Neuropsychological, neuroimaging, and electrophysiological techniques have been applied to the study of false recognition; however, psychopharmacological techniques have not been applied. Benzodiazepine sedative/anxiolytic drugs produce memory deficits similar to those observed in organic amnesia and may be useful tools for studying normal and abnormal memory mechanisms. The present double-blind, placebo-controlled repeated measures study examined the acute effects of orally administered triazolam (Halcion; 0.125 and 0.25 mg/70 kg), a benzodiazepine hypnotic, on performance in the Deese (1959)/Roediger-McDermott (1995) false recognition paradigm in 24 healthy volunteers. Paralleling previous demonstrations in amnesic patients, triazolam produced significant dose-related reductions in false recognition rates to nonstudied words associatively related to studied words, suggesting that false recognition relies on normal memory mechanisms impaired in benzodiazepine-induced amnesia. The results also suggested that relative to placebo, triazolam reduced participants' reliance on memory for item-specific versus list-common semantic information and reduced participants' use of remember versus know responses.

  13. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    Science.gov (United States)

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
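
    Phonological neighbors are commonly operationalized as words differing by a single segment substitution, deletion, or addition. The sketch below counts neighborhood density over a toy lexicon; letter strings stand in for phoneme transcriptions, and the lexicon is invented for illustration.

        ALPHABET = "abcdefghijklmnopqrstuvwxyz"   # stand-in for a phoneme inventory

        def one_step_neighbors(word, lexicon):
            """Lexicon entries differing from `word` by one substitution, deletion, or addition."""
            subs = {word[:i] + c + word[i+1:] for i in range(len(word)) for c in ALPHABET if c != word[i]}
            dels = {word[:i] + word[i+1:] for i in range(len(word))}
            adds = {word[:i] + c + word[i:] for i in range(len(word) + 1) for c in ALPHABET}
            return sorted((subs | dels | adds) & lexicon)

        lexicon = {"cat", "bat", "cab", "cast", "at", "dog"}
        neighbors = one_step_neighbors("cat", lexicon)
        print(neighbors, len(neighbors))   # ['at', 'bat', 'cab', 'cast'] -> neighborhood density of 4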

  14. Memory Asymmetry of Forward and Backward Associations in Recognition Tasks

    Science.gov (United States)

    Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han

    2013-01-01

    There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiment 1–2) or pairs (Experiment 3–6) during the study phase. They then recalled the word by a cue during a cued recall task (Experiment 1–4), and judged whether the presented two words were in the same or in a different order compared to the study phase during a recognition task (Experiment 1–6). To control for perceptual matching between the study and test phase, participants were presented with vertical test pairs when they made directional judgment in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at similar levels as backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction as a constituent part of a memory trace can facilitate associative memory. PMID:22924326

  15. Supine posture affects cortical plasticity in elderly but not young women during a word learning-recognition task.

    Science.gov (United States)

    Spironelli, Chiara; Angrilli, Alessandro

    2017-07-01

    The present research investigated the hypothesis that old age and a horizontal body position both contribute to impaired learning capacity. To this aim, 30 young (mean age: 23.2 years) and 20 elderly women (mean age: 82.8 years) were split into two equal groups, one assigned to the Seated Position (SP) and the other to the horizontal Bed Rest position (hBR). In the Learning Phase, participants were shown 60 randomly distributed words, and in the subsequent Recognition Phase they had to recognize them mixed with a sample of 60 new words. Behavioral analyses showed age-group effects, with young women exhibiting faster response times and higher accuracy rates than elderly women, but no interaction of body position with age group was found. Analysis of the RP component (250-270 ms) revealed greater negativity in the left Occipital gyrus/Cuneus of both sitting age-groups, but significantly left-lateralized RP in the left Lingual gyrus only in young bedridden women. Elderly hBR women showed a lack of left RP lateralization, the main generator being located in the right Cuneus. Young participants had the typical old/new effect (450-800 ms) in different portions of the left Frontal gyri/Uncus, whereas elderly women showed no differences in stimulus processing and its location. EEG alpha activity analyzed during a 3-min resting state, soon after the recognition task, revealed greater alpha amplitude (i.e., cortical inhibition) at posterior sites of hBR elderly women, a result in line with their inhibited posterior RP. In elderly women the left asymmetry of RP was positively correlated with both greater accuracy and faster responses, thus pointing to a dysfunctional role, rather than a compensatory shift, of the observed right RP asymmetry in this group. This finding may have important clinical implications, with particular regard to the long-term side-effects of forced Bed Rest on elderly patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Inattentional blindness for ignored words: comparison of explicit and implicit memory tasks.

    Science.gov (United States)

    Butler, Beverly C; Klein, Raymond

    2009-09-01

    Inattentional blindness is described as the failure to perceive a supra-threshold stimulus when attention is directed away from that stimulus. Based on performance on an explicit recognition memory test and concurrent functional imaging data Rees, Russell, Frith, and Driver [Rees, G., Russell, C., Frith, C. D., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science, 286, 2504-2507] reported inattentional blindness for word stimuli that were fixated but ignored. The present study examined both explicit and implicit memory for fixated but ignored words using a selective-attention task in which overlapping picture/word stimuli were presented at fixation. No explicit awareness of the unattended words was apparent on a recognition memory test. Analysis of an implicit memory task, however, indicated that unattended words were perceived at a perceptual level. Thus, the selective-attention task did not result in perfect filtering as suggested by Rees et al. While there was no evidence of conscious perception, subjects were not blind to the implicit perceptual properties of fixated but ignored words.

  17. Microwave line of sight link engineering

    CERN Document Server

    Angueira, Pablo

    2012-01-01

    A comprehensive guide to the design, implementation, and operation of line of sight microwave link systems The microwave Line of Sight (LOS) transport network of any cellular operator requires at least as much planning effort as the cellular infrastructure itself. The knowledge behind this design has been kept private by most companies and has not been easy to find. Microwave Line of Sight Link Engineering solves this dilemma. It provides the latest revisions to ITU reports and recommendations, which are not only key to successful design but have changed dramatically in

  18. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  19. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success-and thus usefulness-of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
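
    The decision rule itself is simple to state. Below is a sketch of the recognition heuristic and of a set's recognition validity (the proportion of applicable pairs in which the recognized object really has the higher criterion value); the city names, criterion values, and recognition judgments are invented for illustration.

        from itertools import combinations

        def recognition_heuristic(a, b, recognized):
            """Infer that the recognized object has the higher criterion value;
            applicable only when exactly one of the two objects is recognized."""
            if (a in recognized) == (b in recognized):
                return None                      # heuristic not applicable
            return a if a in recognized else b

        def recognition_validity(objects, criterion, recognized):
            """Proportion of applicable pairs in which the recognized object
            actually has the higher criterion value."""
            hits = total = 0
            for a, b in combinations(objects, 2):
                choice = recognition_heuristic(a, b, recognized)
                if choice is None:
                    continue
                total += 1
                other = b if choice == a else a
                hits += criterion[choice] > criterion[other]
            return hits / total if total else float("nan")

        criterion = {"Berlin": 3.6, "Hamburg": 1.8, "Bochum": 0.4, "Fuerth": 0.1}  # e.g. population in millions
        recognized = {"Berlin", "Hamburg"}
        print(recognition_validity(list(criterion), criterion, recognized))        # 1.0 for this toy set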

  20. Learning during Processing: Word Learning Doesn't Wait for Word Recognition to Finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed…