WorldWideScience

Sample records for multimodal picture word

  1. Does "a picture is worth 1000 words" apply to iconic Chinese words? Relationship of Chinese words and pictures.

    Science.gov (United States)

    Lo, Shih-Yu; Yeh, Su-Ling

    2018-05-29

    The meaning of a picture can be extracted rapidly, but the form-to-meaning relationship is less obvious for printed words. In contrast to English words, which follow the grapheme-to-phoneme correspondence rule, the iconic nature of Chinese words might predispose them to activate their semantic representations more directly from their orthographies. By using the paradigm of repetition blindness (RB), which taps into the early level of word processing, we examined whether Chinese words activate their semantic representations as directly as pictures do. RB refers to the failure to detect the second occurrence of an item when it is presented twice in temporal proximity. Previous studies showed RB for semantically related pictures, suggesting that pictures activate their semantic representations directly from their shapes and thus two semantically related pictures are represented as repeated. However, this does not apply to English words, since no RB was found for English synonyms. In this study, we replicated the semantic RB effect for pictures, and further showed the absence of semantic RB for Chinese synonyms. Based on our findings, we suggest that Chinese words are processed like English words, which do not activate their semantic representations as directly as pictures do.

  2. Directed forgetting: Comparing pictures and words.

    Science.gov (United States)

    Quinlan, Chelsea K; Taylor, Tracy L; Fawcett, Jonathan M

    2010-03-01

    The authors investigated directed forgetting as a function of the stimulus type (picture, word) presented at study and test. In an item-method directed forgetting task, study items were presented 1 at a time, each followed with equal probability by an instruction to remember or forget. Participants exhibited greater yes-no recognition of remember than forget items for each of the 4 study-test conditions (picture-picture, picture-word, word-word, word-picture). However, this difference was significantly smaller when pictures were studied than when words were studied. This finding demonstrates that the magnitude of the directed forgetting effect can be reduced by high item memorability, such as when the picture superiority effect is operating. This suggests caution in using pictures at study when the goal of an experiment is to examine potential group differences in the magnitude of the directed forgetting effect.

  3. Pictures Improve Memory of SAT Vocabulary Words.

    Science.gov (United States)

    Price, Melva; Finkelstein, Arleen

    1994-01-01

    Suggests that students can improve their memory of Scholastic Aptitude Test vocabulary words by associating the words with corresponding pictures taken from magazines. Finds that long-term recall of words associated with pictures was higher than recall of words not associated with pictures. (RS)

  4. Tracing the time course of picture-word processing.

    Science.gov (United States)

    Smith, M C; Magee, L E

    1980-12-01

    A number of independent lines of research have suggested that semantic and articulatory information become available differentially from pictures and words. The first of the experiments reported here sought to clarify the time course by which information about pictures and words becomes available by considering the pattern of interference generated when incongruent pictures and words are presented simultaneously in a Stroop-like situation. Previous investigators report that picture naming is easily disrupted by the presence of a distracting word but that word naming is relatively immune to interference from an incongruent picture. Under the assumption that information available from a completed process may disrupt an ongoing process, these results suggest that words access articulatory information more rapidly than do pictures. Experiment 1 extended this paradigm by requiring subjects to verify the category of the target stimulus. In accordance with the hypothesis that pictures access the semantic code more rapidly than words do, there was a reversal in the interference pattern: Word categorization suffered considerable disruption, whereas picture categorization was minimally affected by the presence of an incongruent word. Experiment 2 sought to further test the hypothesis that access to semantic and articulatory codes is different for pictures and words by examining memory for those items following naming or categorization. Categorized words were better recognized than named words, whereas the reverse was true for pictures, a result which suggests that picture naming involves more extensive processing than picture categorization. Experiment 3 replicated this result under conditions in which viewing time was held constant. The last experiment extended the investigation of memory differences to a situation in which subjects were required to generate the superordinate category name. Here, memory for categorized pictures was as good as memory for named pictures.

  5. Distance-dependent processing of pictures and words.

    Science.gov (United States)

    Amit, Elinor; Algom, Daniel; Trope, Yaacov

    2009-08-01

    A series of 8 experiments investigated the association between pictorial and verbal representations and the psychological distance of the referent objects from the observer. The results showed that people better process pictures that represent proximal objects and words that represent distal objects than pictures that represent distal objects and words that represent proximal objects. These results were obtained with various psychological distance dimensions (spatial, temporal, and social), different tasks (classification and categorization), and different measures (speed of processing and selective attention). The authors argue that differences in the processing of pictures and words emanate from the physical similarity of pictures, but not words, to the referents. Consequently, perceptual analysis is commonly applied to pictures but not to words. Pictures thus impart a sense of closeness to the referent objects and are preferably used to represent such objects, whereas words do not convey proximity and are preferably used to represent distal objects in space, time, and social perspective.

  6. Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks

    OpenAIRE

    Harvey, Denise Y.; Schnur, Tatiana T.

    2016-01-01

    Naming pictures and matching words to pictures belonging to the same semantic category impairs performance relative to when stimuli come from different semantic categories (i.e., semantic interference). Despite similar semantic interference phenomena in both picture naming and word-picture matching tasks, the locus of interference has been attributed to different levels of the language system – lexical in naming and semantic in word-picture matching. Although both tasks involve access to shar...

  7. Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks.

    Science.gov (United States)

    Harvey, Denise Y; Schnur, Tatiana T

    2016-01-01

    Naming pictures and matching words to pictures belonging to the same semantic category impairs performance relative to when stimuli come from different semantic categories (i.e., semantic interference). Despite similar semantic interference phenomena in both picture naming and word-picture matching tasks, the locus of interference has been attributed to different levels of the language system - lexical in naming and semantic in word-picture matching. Although both tasks involve access to shared semantic representations, the extent to which interference originates and/or has its locus at a shared level remains unclear, as these effects are often investigated in isolation. We manipulated semantic context in cyclical picture naming and word-picture matching tasks, and tested whether factors tapping semantic-level (generalization of interference to novel category items) and lexical-level processes (interactions with lexical frequency) affected the magnitude of interference, while also assessing whether interference occurs at a shared processing level(s) (transfer of interference across tasks). We found that semantic interference in naming was sensitive to both semantic- and lexical-level processes (i.e., larger interference for novel vs. old and low- vs. high-frequency stimuli), consistent with a semantically mediated lexical locus. Interference in word-picture matching exhibited stable interference for old and novel stimuli and did not interact with lexical frequency. Further, interference transferred from word-picture matching to naming. Together, these experiments provide evidence to suggest that semantic interference in both tasks originates at a shared processing stage (presumably at the semantic level), but that it exerts its effect at different loci when naming pictures vs. matching words to pictures.

  8. The blocked-random effect in pictures and words.

    Science.gov (United States)

    Toglia, M P; Hinman, P J; Dayton, B S; Catalano, J F

    1997-06-01

    Picture and word recall was examined in conjunction with list organization. 60 subjects studied a list of 30 items, either words or their pictorial equivalents. The 30 words/pictures, members of five conceptual categories, each represented by six exemplars, were presented either blocked by category or in a random order. While pictures were recalled better than words and a standard blocked-random effect was observed, the interaction indicated that the recall advantage of a blocked presentation was restricted to the word lists. A similar pattern emerged for clustering. These findings are discussed in terms of limitations upon the pictorial superiority effect.

  9. Attention and Gaze Control in Picture Naming, Word Reading, and Word Categorizing

    Science.gov (United States)

    Roelofs, Ardi

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture-word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to…

  10. Acoustic and semantic interference effects in words and pictures.

    Science.gov (United States)

    Dhawan, M; Pellegrino, J W

    1977-05-01

    Interference effects for pictures and words were investigated using a probe-recall task. Word stimuli showed acoustic interference effects for items at the end of the list and semantic interference effects for items at the beginning of the list, similar to results of Kintsch and Buschke (1969). Picture stimuli showed large semantic interference effects at all list positions with smaller acoustic interference effects. The results were related to latency data on picture-word processing and interpreted in terms of the differential order, probability, and/or speed of access to acoustic and semantic levels of processing. A levels of processing explanation of picture-word retention differences was related to dual coding theory. Both theoretical positions converge on an explanation of picture-word retention differences as a function of the relative capacity for semantic or associative processing.

  11. Conceptual control across modalities: graded specialisation for pictures and words in inferior frontal and posterior temporal cortex.

    Science.gov (United States)

    Krieger-Redwood, Katya; Teige, Catarina; Davey, James; Hymers, Mark; Jefferies, Elizabeth

    2015-09-01

    Controlled semantic retrieval to words elicits co-activation of inferior frontal (IFG) and left posterior temporal cortex (pMTG), but research has not yet established (i) the distinct contributions of these regions or (ii) whether the same processes are recruited for non-verbal stimuli. Words have relatively flexible meanings - as a consequence, identifying the context that links two specific words is relatively demanding. In contrast, pictures are richer stimuli and their precise meaning is better specified by their visible features - however, not all of these features will be relevant to uncovering a given association, tapping selection/inhibition processes. To explore potential differences across modalities, we took a commonly-used manipulation of controlled retrieval demands, namely the identification of weak vs. strong associations, and compared word and picture versions. There were 4 key findings: (1) Regions of interest (ROIs) in posterior IFG (BA44) showed graded effects of modality (e.g., words>pictures in left BA44; pictures>words in right BA44). (2) An equivalent response was observed in left mid-IFG (BA45) across modalities, consistent with the multimodal semantic control deficits that typically follow LIFG lesions. (3) The anterior IFG (BA47) ROI showed a stronger response to verbal than pictorial associations, potentially reflecting a role for this region in establishing a meaningful context that can be used to direct semantic retrieval. (4) The left pMTG ROI also responded to difficulty across modalities yet showed a stronger response overall to verbal stimuli, helping to reconcile two distinct literatures that have implicated this site in semantic control and lexical-semantic access respectively. We propose that left anterior IFG and pMTG work together to maintain a meaningful context that shapes ongoing semantic processing, and that this process is more strongly taxed by word than picture associations.

  12. Effects of Word Recognition Training in a Picture-Word Interference Task: Automaticity vs. Speed.

    Science.gov (United States)

    Ehri, Linnea C.

    First and second graders were taught to recognize a set of written words either more accurately or more rapidly. Both before and after word training, they named pictures printed with and without these words as distractors. Of interest was whether training would enhance or diminish the interference created by these words in the picture naming task.…

  13. Interference Effects on the Recall of Pictures, Printed Words and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    Thirty college undergraduates participated in a study of the effects of acoustic and visual interference on the recall of word and picture triads in both short-term and long-term memory. The subjects were presented 24 triads of monosyllabic nouns representing all of the possible combinations of presentation types: pictures, printed words, and…

  14. [French norms of imagery for pictures, for concrete and abstract words].

    Science.gov (United States)

    Robin, Frédérique

    2006-09-01

    This paper deals with French norms for mental image versus picture agreement for 138 pictures and the imagery value for 138 concrete words and 69 abstract words. The pictures were selected from Snodgrass and Vanderwart's norms (1980). The concrete words correspond to the dominant naming response to the pictorial stimuli. The abstract words were taken from verbal associative norms published by Ferrand (2001). The norms were established according to two variables: 1) mental image vs. picture agreement, and 2) imagery value of words. Three other variables were controlled: 1) picture naming agreement; 2) familiarity of objects referred to in the pictures and the concrete words, and 3) subjective verbal frequency of words. The originality of this work is to provide French imagery norms for the three kinds of stimuli usually compared in research on dual coding. Moreover, these norms support the study of variations between figurative and verbal stimuli in visual imagery processes.

  15. Where and how morphologically complex words interplay with naming pictures.

    Science.gov (United States)

    Zwitserlood, Pienie; Bölte, Jens; Dohmes, Petra

    2002-01-01

    Two picture-word experiments are reported in which a delay of 7 to 10 was introduced between distractor and picture. Distractor words were either derived words (Experiment 1) or compounds (Experiment 2), morphologically related to the picture name. In both experiments, the position of morphological overlap between distractor (e.g., rosebud vs tea-rose) and picture name (rose) was manipulated. Clear facilitation of picture naming latencies was obtained when pictures were paired with morphological distractors, and effects were independent of distractor type and position of overlap. The results are evaluated against "full listing" and "decomposition" approaches of morphological representation. Copyright 2002 Elsevier Science (USA).

  16. Age of acquisition and word frequency in written picture naming.

    Science.gov (United States)

    Bonin, P; Fayol, M; Chalard, M

    2001-05-01

    This study investigates age of acquisition (AoA) and word frequency effects in both spoken and written picture naming. In the first two experiments, reliable AoA effects on object naming speed, with objective word frequency controlled for, were found in both spoken (Experiment 1) and written picture naming (Experiment 2). In contrast, no reliable objective word frequency effects were observed on naming speed, with AoA controlled for, in either spoken (Experiment 3) or written (Experiment 4) picture naming. The implications of the findings for written picture naming are briefly discussed.

  17. Attention and gaze control in picture naming, word reading, and word categorizing

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment

  18. The word-frequency paradox for recall/recognition occurs for pictures.

    Science.gov (United States)

    Karlsen, Paul Johan; Snodgrass, Joan Gay

    2004-08-01

    A yes-no recognition task and two recall tasks were conducted using pictures with high and low familiarity ratings. Picture familiarity had effects analogous to those of word frequency and replicated the word-frequency paradox in recall and recognition: low-familiarity pictures were more recognizable than high-familiarity pictures, pure lists of high-familiarity pictures were more recallable than pure lists of low-familiarity pictures, and there was no effect of familiarity for mixed lists. These results are consistent with the predictions of the Search of Associative Memory (SAM) model.

  19. Sex differences in memory estimates for pictures and words.

    Science.gov (United States)

    Ionescu, M D

    2000-08-01

    Memory performance estimates of men and women before and after a recall test were investigated. College students (17 men and 20 women), all juniors, participated in a memory task involving the recall of 80 stimuli (40 pictures and 40 words). Before and after the task they were asked to provide estimates of their pre- and postrecall performance. Although no sex differences were found in total correct recall, recall for pictures, recall for words, or in the estimates of memory performance before the recall task, there were significant differences after the test: women underestimated their performance on the word items and men underestimated their performance on the picture items.

  20. Distance-Dependent Processing of Pictures and Words

    Science.gov (United States)

    Amit, Elinor; Algom, Daniel; Trope, Yaacov

    2009-01-01

    A series of 8 experiments investigated the association between pictorial and verbal representations and the psychological distance of the referent objects from the observer. The results showed that people better process pictures that represent proximal objects and words that represent distal objects than pictures that represent distal objects and…

  1. Dissociations between word and picture naming in Persian speakers with aphasia

    Directory of Open Access Journals (Sweden)

    Mehdi Bakhtiar

    2014-04-01

    Studies of patients with aphasia have found dissociations in their ability to read words and name pictures (Hillis & Caramazza, 1995; Hillis & Caramazza, 1991). Persian orthography is characterised by nearly regular orthography-phonology (OP) mappings; however, the omission of some vowels in the script makes the OP mapping of many words less predictable. The aim of this study was to compare the predictive lexico-semantic variables across reading and picture naming tasks in Persian speakers with aphasia while considering the variability across participants and items using mixed modeling. Methods and results: A total of 21 brain-injured Persian-speaking patients with aphasia were asked to name 200 normalized Snodgrass object pictures and words taken from Bakhtiar, Nilipour and Weekes (2013) in different sessions. The results showed that word naming performance was significantly better than object naming in Persian speakers with aphasia (p < 0.0001). McNemar's tests of individual differences found that 18 patients showed significantly better performance in word reading compared to picture naming, 2 patients showed no difference between naming and reading (cases 1 and 10), and one patient (case 5) showed significantly better naming compared to reading, χ²(1) = 10.23, p < 0.01 (see also Figure 1). A mixed-effects logistic regression analysis revealed that the degree of spelling transparency (i.e., the number of letters in a word divided by the number of its phonemes) had an effect on word naming (along with frequency, age of acquisition (AoA), and imageability) and on picture naming (along with image agreement, AoA, word length, frequency, and name agreement), with a much stronger effect on the word naming task (b = 1.67, SE = 0.41, z = 4.05, p < 0.0001) than on the picture naming task (b = -0.64, SE = 0.32, z = 2, p < 0.05). Conclusion: The dissociation between word naming and picture naming shown by many patients suggests that at least two routes are available.
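
    The statistical details above are compact, so here is a minimal, hypothetical Python sketch of the two quantities the abstract relies on: the spelling-transparency ratio (letters divided by phonemes) and a logistic regression of naming accuracy on that predictor. The item values are invented, and the model is a plain fixed-effects logistic regression rather than the mixed-effects model (with frequency, AoA, imageability, and crossed random effects for participants and items) fitted by the authors.

    ```python
    # Illustrative sketch only: spelling transparency as letters/phonemes,
    # entered as a predictor of naming accuracy. Item values are invented.
    import pandas as pd
    import statsmodels.formula.api as smf

    def spelling_transparency(n_letters: int, n_phonemes: int) -> float:
        """Degree of spelling transparency: letters in the written form
        divided by phonemes in the spoken form (as defined in the abstract)."""
        return n_letters / n_phonemes

    # Hypothetical trial-level data: 1 = correct naming response, 0 = error.
    trials = pd.DataFrame({
        "n_letters":  [3, 4, 5, 3, 6, 4, 5, 3, 4, 6, 5, 4],
        "n_phonemes": [4, 4, 6, 5, 6, 5, 5, 4, 6, 7, 5, 4],
        "correct":    [1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1],
    })
    trials["transparency"] = trials.apply(
        lambda r: spelling_transparency(r["n_letters"], r["n_phonemes"]), axis=1)

    # Simple fixed-effects-only logistic regression; the original analysis
    # also included other predictors and crossed random effects.
    model = smf.logit("correct ~ transparency", data=trials).fit(disp=False)
    print(model.params)
    ```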

  2. Why do pictures, but not visual words, reduce older adults' false memories?

    Science.gov (United States)

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  3. Errorless discrimination and picture fading as techniques for teaching sight words to TMR students.

    Science.gov (United States)

    Walsh, B F; Lamberts, F

    1979-03-01

    The effectiveness of two approaches for teaching beginning sight words to 30 TMR students was compared. In Dorry and Zeaman's picture-fading technique, words are taught through association with pictures that are faded out over a series of trials, while in the Edmark program errorless-discrimination technique, words are taught through shaped sequences of visual and auditory-visual matching-to-sample, with the target word first appearing alone and eventually appearing with orthographically similar words. Students were instructed on two lists of 10 words each, one list in the picture-fading and one in the discrimination method, in a double counter-balanced, repeated-measures design. Covariance analysis on three measures (word identification, word recognition, and picture-word matching) showed highly significant differences between the two methods. Students' performance was better after instruction with the errorless-discrimination method than after instruction with the picture-fading method. The findings on picture fading were interpreted as indicating a possible failure of the shifting of control from picture to printed word that earlier researchers have hypothesized as occurring.

  4. Short-term retention of pictures and words: evidence for dual coding systems.

    Science.gov (United States)

    Pellegrino, J W; Siegel, A W; Dhawan, M

    1975-03-01

    The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments recall of pictures was superior to words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.

  5. The effects of recall-concurrent visual-motor distraction on picture and word recall.

    Science.gov (United States)

    Warren, M W

    1977-05-01

    The dual-coding model (Paivio, 1971, 1975) predicts a larger imaginal component in the recall of pictures relative to words and a larger imaginal component in the recall of concrete words relative to abstract words. These predictions were tested by examining the effect of a recall-concurrent imagery-suppression task (pursuit-rotor tracking) on the recall of pictures vs picture labels and on the recall of concrete words vs abstract words. The results showed that recall-concurrent pursuit-rotor tracking interfered with picture recall, but not word recall (Experiments 1 and 2); however, there was no evidence of an effect of recall-concurrent tracking on the recall of concrete words (Experiment 3). The results suggested a revision of the dual-coding model.

  6. Electrophysiological differences in the processing of affective information in words and pictures.

    Science.gov (United States)

    Hinojosa, José A; Carretié, Luis; Valcárcel, María A; Méndez-Bértolo, Constantino; Pozo, Miguel A

    2009-06-01

    It is generally assumed that affective picture viewing is related to higher levels of physiological arousal than is the reading of emotional words. However, this assertion is based mainly on studies in which the processing of either words or pictures has been investigated under heterogeneous conditions. Positive, negative, relaxing, neutral, and background (stimulus fragments) words and pictures were presented to subjects in two experiments under equivalent experimental conditions. In Experiment 1, neutral words elicited an enhanced late positive component (LPC) that was associated with an increased difficulty in discriminating neutral from background stimuli. In Experiment 2, high-arousing pictures elicited an enhanced early negativity and LPC that were related to facilitated processing of these stimuli. Thus, it seems that under some circumstances, the processing of affective information captures attention only with more biologically relevant stimuli. Also, these data might be better interpreted on the basis of those models that postulate a different access to affective information for words and pictures.

  7. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    Science.gov (United States)

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  8. The Words Children Hear: Picture Books and the Statistics for Language Learning.

    Science.gov (United States)

    Montag, Jessica L; Jones, Michael N; Smith, Linda B

    2015-09-01

    Young children learn language from the speech they hear. Previous work suggests that greater statistical diversity of words and of linguistic contexts is associated with better language outcomes. One potential source of lexical diversity is the text of picture books that caregivers read aloud to children. Many parents begin reading to their children shortly after birth, so this is potentially an important source of linguistic input for many children. We constructed a corpus of 100 children's picture books and compared word type and token counts in that sample and a matched sample of child-directed speech. Overall, the picture books contained more unique word types than the child-directed speech. Further, individual picture books generally contained more unique word types than length-matched, child-directed conversations. The text of picture books may be an important source of vocabulary for young children, and these findings suggest a mechanism that underlies the language benefits associated with reading to children. © The Author(s) 2015.
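
    The type and token counts at the heart of this comparison are easy to illustrate. Below is a minimal, hypothetical sketch; the example sentences and the crude tokenizer are placeholders, not the book corpus or child-directed speech sample used in the study. It counts unique word types and total word tokens and prints a type-token ratio for each sample.

    ```python
    # Minimal type/token sketch; the texts and tokenizer are illustrative
    # placeholders, not the materials analyzed in the study.
    import re

    def type_token_counts(text: str) -> tuple[int, int]:
        """Return (number of unique word types, number of word tokens)."""
        tokens = re.findall(r"[a-z']+", text.lower())  # crude tokenizer
        return len(set(tokens)), len(tokens)

    picture_book_text = "the very hungry caterpillar ate one red apple but he was still hungry"
    child_directed_text = "look at the doggie the doggie is so big look look at him"

    for label, text in [("picture book", picture_book_text),
                        ("child-directed speech", child_directed_text)]:
        types, tokens = type_token_counts(text)
        print(f"{label}: {types} types / {tokens} tokens "
              f"(type-token ratio = {types / tokens:.2f})")
    ```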

  9. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.

  10. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language.

    Science.gov (United States)

    Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela

    2017-01-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word's meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training.

  11. Sex differences in memory estimates for pictures and words with multiple recall trials.

    Science.gov (United States)

    Ionescu, Marcos D

    2004-04-01

    Undergraduate students (23 men and 23 women) provided memory performance estimates before and after each of three recall trials involving 80 stimuli (40 pictures and 40 words). No sex differences were found across trials for the total recall of items or for the recall of pictures and words separately. A significant increase in recall for pictures (not words) was found for both sexes across trials. The previous results of Ionescu were replicated on the first and second recall trials: men underestimated their performance on the pictures and women underestimated their performance on the word items. These differences in postrecall estimates were not found after the third recall trial: men and women alike underestimated their performance on both the picture and word items. The disappearance of item-specific sex differences in postrecall estimates for the third recall trial does not imply that men and women become more accurate at estimating their actual performance with multiple recall trials.

  12. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language

    Directory of Open Access Journals (Sweden)

    Claudia Repetto

    2017-12-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word's meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training.

  13. Sight Word Recognition among Young Children At-Risk: Picture-Supported vs. Word-Only

    Science.gov (United States)

    Meadan, Hedda; Stoner, Julia B.; Parette, Howard P.

    2008-01-01

    A quasi-experimental design was used to investigate the impact of Picture Communication Symbols (PCS) on sight word recognition by young children identified as "at risk" for academic and social-behavior difficulties. Ten pre-primer and 10 primer Dolch words were presented to 23 students in the intervention group and 8 students in the…

  14. The role of verbal and pictorial information in multimodal incidental acquisition of foreign language vocabulary.

    Science.gov (United States)

    Bisson, Marie-Josée; van Heuven, Walter J B; Conklin, Kathy; Tunney, Richard J

    2015-01-01

    This study used eye tracking to investigate the allocation of attention to multimodal stimuli during an incidental learning situation, as well as its impact on subsequent explicit learning. Participants were exposed to foreign language (FL) auditory words on their own, in conjunction with written native language (NL) translations, or with both written NL translations and pictures. Incidental acquisition of FL words was assessed the following day through an explicit learning task where participants learned to recognize translation equivalents, as well as one week later through recall and translation recognition tests. Results showed higher accuracy scores in the explicit learning task for FL words presented with meaning during incidental learning, whether written meaning or both written meaning and picture, than for FL words presented auditorily only. However, participants recalled significantly more FL words after a week delay if they had been presented with a picture during incidental learning. In addition, the time spent looking at the pictures during incidental learning significantly predicted recognition and recall scores one week later. Overall, results demonstrated the impact of exposure to multimodal stimuli on subsequent explicit learning, as well as the important role that pictorial information can play in incidental vocabulary acquisition.

  15. Cross-cultural evidence for multimodal motherese: Asian Indian mothers' adaptive use of synchronous words and gestures.

    Science.gov (United States)

    Gogate, Lakshmi; Maganti, Madhavilatha; Bahrick, Lorraine E

    2015-01-01

    In a quasi-experimental study, 24 Asian Indian mothers were asked to teach novel (target) names for two objects and two actions to their children of three different levels of lexical mapping development: prelexical (5-8 months), early lexical (9-17 months), and advanced lexical (20-43 months). Target naming (n=1482) and non-target naming (other, n=2411) were coded for synchronous spoken words and object motion (multimodal motherese) and other naming styles. Indian mothers abundantly used multimodal motherese with target words to highlight novel word-referent relations, paralleling earlier findings from American mothers. They used it with target words more often for prelexical infants than for advanced lexical children and to name target actions later in children's development. Unlike American mothers, Indian mothers also abundantly used multimodal motherese to name target objects later in children's development. Finally, monolingual mothers who spoke a verb-dominant Indian language used multimodal motherese more often than bilingual mothers who also spoke noun-dominant English to their children. The findings suggest that within a dynamic and reciprocal mother-infant communication system, multimodal motherese adapts to unify novel words and referents across cultures. It adapts to children's level of lexical development and to ambient language-specific lexical dominance hierarchies. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Storage and retrieval properties of dual codes for pictures and words in recognition memory.

    Science.gov (United States)

    Snodgrass, J G; McClure, P

    1975-09-01

    Storage and retrieval properties of pictures and words were studied within a recognition memory paradigm. Storage was manipulated by instructing subjects either to image or to verbalize to both picture and word stimuli during the study sequence. Retrieval was manipulated by re-presenting a proportion of the old picture and word items in their opposite form during the recognition test (i.e., some old pictures were tested with their corresponding words and vice versa). Recognition performance for pictures was identical under the two instructional conditions, whereas recognition performance for words was markedly superior under the imagery instruction condition. It was suggested that subjects may engage in dual coding of simple pictures naturally, regardless of instructions, whereas dual coding of words may occur only under imagery instructions. The form of the test item had no effect on recognition performance for either type of stimulus and under either instructional condition. However, change of form of the test item markedly reduced item-by-item correlations between the two instructional conditions. It is tentatively proposed that retrieval is required in recognition, but that the effect of a form change is simply to make the retrieval process less consistent, not less efficient.

  17. Effects of Multimodal Information on Learning Performance and Judgment of Learning

    Science.gov (United States)

    Chen, Gongxiang; Fu, Xiaolan

    2003-01-01

    Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…

  18. Cross-Cultural Evidence for Multimodal Motherese: Asian-Indian Mothers’ Adaptive Use of Synchronous Words and Gestures

    Science.gov (United States)

    Gogate, Lakshmi; Maganti, Madhavilatha; Bahrick, Lorraine E.

    2014-01-01

    In a quasi-experimental study, twenty-four Asian-Indian mothers were asked to teach novel (target) names for two objects and two actions to their children of three different levels of lexical-mapping development, pre-lexical (5–8 months), early-lexical (9–17 months), and advanced-lexical (20–43 months). Target (N = 1482) and non-target (other, N = 2411) naming was coded for synchronous spoken words and object motion (multimodal motherese) and other naming styles. Indian mothers abundantly used multimodal motherese with target words to highlight novel word-referent relations, paralleling earlier findings from American mothers (Gogate, Bahrick, & Watson, 2000). They used it with target words more often for pre-lexical infants than advanced-lexical children, and to name target actions later into children’s development. Unlike American mothers, Indian mothers also abundantly used multimodal motherese to name target objects later into children’s development. Finally, monolingual mothers who spoke a verb-dominant Indian language used multimodal motherese more often than bilingual mothers who also spoke noun-dominant English to their child. The findings suggest that within a dynamic and reciprocal mother-infant communication system, multimodal motherese adapts to unify novel words and referents across cultures. It adapts to children’s level of lexical development and to ambient language-specific lexical-dominance hierarchies. PMID:25285369

  19. Emotional Facilitation Effect in the Picture-Word Interference Task: An ERP Study

    Science.gov (United States)

    Liu, Baolin; Xin, Shuai; Jin, Zhixing; Hu, Yu; Li, Yang

    2010-01-01

    In this paper, we aimed to verify the emotional facilitation effect in the picture-word interference task using event-related potentials. Twenty-one healthy subjects were asked to categorize the emotional valences of pictures accompanied by emotionally congruent, either centrally or laterally positioned Chinese words. For both the foveal and…

  20. Improvement of encoding and retrieval in normal and pathological aging with word-picture paradigm.

    Science.gov (United States)

    Iodice, Rosario; Meilán, Juan José G; Carro, Juan

    2015-01-01

    During the aging process, there is a progressive deficit in the encoding of new information and its retrieval. Different strategies are used in order to maintain, optimize or diminish these deficits in people with and without dementia. One of the classic techniques is paired-associate learning (PAL), which is based on improving the encoding of memories, but it has yet to be used to its full potential in people with dementia. In this study, our aim is to corroborate the importance of PAL tasks as instrumental tools for creating contextual cues, during both the encoding and retrieval phases of memory. Additionally, we aim to identify the most effective form of presenting the related items. Pairs of stimuli were shown to healthy elderly people and to patients with moderate and mild Alzheimer's disease. The encoding conditions were as follows: word/word, picture/picture, picture/word, and word/picture. Associative cued recall of the second item in the pair shows that retrieval is higher for the word/picture condition in the two groups of patients with dementia when compared to the other conditions, while word/word is the least effective in all cases. These results confirm that PAL is an effective tool for creating contextual cues during both the encoding and retrieval phases in people with dementia when the items are presented using the word/picture condition. In this way, the encoding and retrieval deficit can be reduced in these people.

  1. Implicit and explicit attention to pictures and words: An fMRI-study of concurrent emotional stimulus processing

    Directory of Open Access Journals (Sweden)

    Tobias Flaisch

    2015-12-01

    The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.

  2. Implicit and Explicit Attention to Pictures and Words: An fMRI-Study of Concurrent Emotional Stimulus Processing.

    Science.gov (United States)

    Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T

    2015-01-01

    The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral, and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.

  3. Implicit and Explicit Attention to Pictures and Words: An fMRI-Study of Concurrent Emotional Stimulus Processing

    Science.gov (United States)

    Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T.

    2015-01-01

    The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral, and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms. PMID:26733895

  4. Event-related potentials and recognition memory for pictures and words: the effects of intentional and incidental learning.

    Science.gov (United States)

    Noldy, N E; Stelmack, R M; Campbell, K B

    1990-07-01

    Event-related potentials were recorded under conditions of intentional or incidental learning of pictures and words, and during the subsequent recognition memory test for these stimuli. Intentionally learned pictures were remembered better than incidentally learned pictures and intentionally learned words, which, in turn, were remembered better than incidentally learned words. In comparison to pictures that were ignored, the pictures that were attended were characterized by greater positive amplitude frontally at 250 ms and centro-parietally at 350 ms and by greater negativity at 450 ms at parietal and occipital sites. There were no effects of attention on the waveforms elicited by words. These results support the view that processing becomes automatic for words, whereas the processing of pictures involves additional effort or allocation of attentional resources. The N450 amplitude was greater for words than for pictures during both acquisition (intentional items) and recognition phases (hit and correct rejection categories for intentional items, hit category for incidental items). Because pictures are better remembered than words, the greater late positive wave (600 ms) elicited by the pictures than the words during the acquisition phase is also consistent with the association between P300 and better memory that has been reported.

  5. Can pictures promote the acquisition of sight-word reading? An evaluation of two potential instructional strategies.

    Science.gov (United States)

    Richardson, Amy R; Lerman, Dorothea C; Nissen, Melissa A; Luck, Kally M; Neal, Ashley E; Bao, Shimin; Tsami, Loukia

    2017-01-01

    Sight-word instruction can be a useful supplement to phonics-based methods under some circumstances. Nonetheless, few studies have evaluated the conditions under which pictures may be used successfully to teach sight-word reading. In this study, we extended prior research by examining two potential strategies for reducing the effects of overshadowing when using picture prompts. Five children with developmental disabilities and two typically developing children participated. In the first experiment, the therapist embedded sight words within pictures but gradually faded in the pictures as needed using a least-to-most prompting hierarchy. In the second experiment, the therapist embedded text-to-picture matching within the sight-word reading sessions. Results suggested that these strategies reduced the interference typically observed with picture prompts and enhanced performance during teaching sessions for the majority of participants. Text-to-picture matching also accelerated mastery of the sight words relative to a condition under which the therapist presented text without pictures. © 2016 Society for the Experimental Analysis of Behavior.

  6. Words vs. Pictures: Perceived Impact and Connotative Meaning

    Science.gov (United States)

    Culbertson, Hugh M.

    1974-01-01

    Results of two studies indicate that word messages carry more impact than pictures and an analysis of variance reveals that iconicity and sensationalism each related positively to both evaluative-ethical and interest-vitality ratings. (RB)

  7. Word and picture matching: a PET study of semantic category effects.

    Science.gov (United States)

    Perani, D; Schnur, T; Tettamanti, M; Gorno-Tempini, M; Cappa, S F; Fazio, F

    1999-03-01

    We report two positron emission tomography (PET) studies of cerebral activation during picture and word matching tasks, in which we compared directly the processing of stimuli belonging to different semantic categories (animate and inanimate) in the visual (pictures) and verbal (words) modality. In the first experiment, brain activation was measured in eleven healthy adults during a same/different matching task for textures, meaningless shapes, and pictures of animals and artefacts (tools). Activations for meaningless shapes, when compared to visual texture discrimination, were localized in the left occipital and inferior temporal cortex. Animal picture identification, both in the comparison with meaningless shapes and in the direct comparison with non-living pictures, involved primarily activation of occipital regions, namely the lingual gyrus bilaterally and the left fusiform gyrus. For artefact picture identification, both in the comparison with the meaningless-shape baseline and in the direct comparison with living pictures, all activations were left hemispheric, in the dorsolateral frontal (BA 44/6 and 45) and temporal (BA 21, 20) cortex. In the second experiment, brain activation was measured in eight healthy adults during a same/different matching task for visually presented words referring to animals and manipulable objects (tools); the baseline was a pseudoword discrimination task. When compared with the tool condition, the animal condition activated posterior left hemispheric areas, namely the fusiform (BA 37) and the inferior occipital gyrus (BA 18). The right superior parietal lobule (BA 7) and the left thalamus were also activated. The reverse comparison (tools vs animals) showed left hemispheric activations in the middle temporal gyrus (BA 21) and precuneus (BA 7), as well as bilateral activation in the occipital regions. These results are compatible with different brain networks subserving the identification of living and non-living entities; in

  8. Short-Term Memory for Pictures and Words by Mentally Retarded and Nonretarded Persons.

    Science.gov (United States)

    Ellis, Norman R.; Wooldridge, Peter W.

    1985-01-01

    Twelve mentally retarded and 12 nonretarded adults were compared in a Brown-Peterson short-term memory task for the retention of words and pictures over intervals up to 30 seconds. The retarded subjects forgot more rapidly over the initial 10 seconds. They also retained pictures better than they did words. (Author/DB)

  9. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    Science.gov (United States)

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.
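
    For readers unfamiliar with the process dissociation procedure referenced above, the basic logic can be sketched in a few lines. The study itself fit more elaborate independent-retrieval and generate-source models; the snippet below is only a minimal illustration of Jacoby's (1991) inclusion/exclusion equations under an independence assumption, using made-up completion rates.

        # Minimal sketch of Jacoby's (1991) process dissociation equations,
        # assuming independence of conscious (C) and automatic (A) influences:
        #   P(inclusion) = C + (1 - C) * A
        #   P(exclusion) = (1 - C) * A
        def process_dissociation(p_inclusion, p_exclusion):
            conscious = p_inclusion - p_exclusion              # C = Inc - Exc
            automatic = p_exclusion / (1.0 - conscious) if conscious < 1.0 else float("nan")
            return conscious, automatic

        # Hypothetical stem-completion rates for studied pictures (illustration only).
        C, A = process_dissociation(p_inclusion=0.55, p_exclusion=0.25)
        print(f"conscious C = {C:.2f}, automatic A = {A:.2f}")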

  10. Event-related brain responses to emotional words, pictures, and faces - a cross-domain comparison.

    Science.gov (United States)

    Bayer, Mareike; Schacht, Annekathrin

    2014-01-01

    Emotion effects in event-related brain potentials (ERPs) have previously been reported for a range of visual stimuli, including emotional words, pictures, and facial expressions. Still, little is known about the actual comparability of emotion effects across these stimulus classes. The present study aimed to fill this gap by investigating emotion effects in response to words, pictures, and facial expressions using a blocked within-subject design. Furthermore, ratings of stimulus arousal and valence were collected from an independent sample of participants. Modulations of early posterior negativity (EPN) and late positive complex (LPC) were visible for all stimulus domains, but showed clear differences, particularly in valence processing. While emotion effects were limited to positive stimuli for words, they were predominant for negative stimuli in pictures and facial expressions. These findings corroborate the notion of a positivity offset for words and a negativity bias for pictures and facial expressions, which was assumed to be caused by generally lower arousal levels of written language. Interestingly, however, these assumed differences were not confirmed by arousal ratings. Instead, words were rated as overall more positive than pictures and facial expressions. Taken together, the present results point toward systematic differences in the processing of written words and pictorial stimuli of emotional content, not only in terms of a valence bias evident in ERPs, but also concerning their emotional evaluation captured by ratings of stimulus valence and arousal.

  11. Selective Inhibition and Naming Performance in Semantic Blocking, Picture-Word Interference, and Color-Word Stroop Tasks

    Science.gov (United States)

    Shao, Zeshu; Roelofs, Ardi; Martin, Randi C.; Meyer, Antje S.

    2015-01-01

    In 2 studies, we examined whether explicit distractors are necessary and sufficient to evoke selective inhibition in 3 naming tasks: the semantic blocking, picture-word interference, and color-word Stroop task. Delta plots were used to quantify the size of the interference effects as a function of reaction time (RT). Selective inhibition was…
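
    As a rough illustration of the delta-plot analysis mentioned above (not the authors' code), each condition's reaction times can be cut into quantile bins and the interference effect plotted against the mean RT of each bin; the data below are simulated.

        import numpy as np

        def delta_plot_points(rt_congruent, rt_incongruent, n_bins=5):
            """Return (mean RT per bin, interference effect per bin) for a delta plot."""
            def bin_means(rts):
                rts = np.sort(np.asarray(rts, dtype=float))
                return np.array([b.mean() for b in np.array_split(rts, n_bins)])
            con, inc = bin_means(rt_congruent), bin_means(rt_incongruent)
            return (con + inc) / 2.0, inc - con

        # Simulated single-participant RTs (ms) from a picture-word interference task.
        rng = np.random.default_rng(0)
        x, y = delta_plot_points(rng.normal(650, 80, 200), rng.normal(700, 90, 200))
        print(np.round(x), np.round(y))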

  12. Memory for Pictures, Words, and Spatial Location in Older Adults: Evidence for Pictorial Superiority.

    Science.gov (United States)

    Park, Denise Cortis; And Others

    1983-01-01

    Tested recognition memory for items and spatial location by varying picture and word stimuli across four slide quadrants. Results showed a pictorial superiority effect for item recognition and a greater ability to remember the spatial location of pictures versus words for both old and young adults (N=95). (WAS)

  13. Case Study: A Picture Worth a Thousand Words? Making a Case for Video Case Studies

    Science.gov (United States)

    Pai, Aditi

    2014-01-01

    A picture, they say, is worth a thousand words. If a mere picture is worth a thousand words, how much more are "moving pictures" or videos worth? The author poses this not merely as a rhetorical question, but because she wishes to make a case for using videos in the traditional case study method. She recommends four main approaches of…

  14. Time displacement pictures with multi-mode probes from circumferential welds

    International Nuclear Information System (INIS)

    Wustenberg, H.; Jaffrey, D.; Ludwig, B.; Bertus, N.; Erhard, A.

    1985-01-01

    If a creeping wave probe is applied to butt welds, typical echo patterns from weld defects can be received. It seems possible that echoes from the geometric shape of the root or the crown and defect echoes can be separated by simple means. This has been the reason for the development of a special presentation of the echo patterns received by this multi-mode creeping wave probe. The so-called time displacement pictures show the A/D-converted A-scans in a gray scale along a line corresponding to the time axis of the propagation. Perpendicular to this time axis, results obtained from displacement of the probe parallel to the weld are presented. This kind of picture immediately provides the whole A-scan information. This paper presents some first results on simulated welds with artificial defects and on circumferential welds with typical geometric imperfections.
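
    The presentation described above amounts to stacking the digitized A-scans as grey-value lines, with time of flight along one axis and probe displacement along the weld on the other. The following sketch (simulated data only, not tied to the probe hardware described here) shows how such a picture can be assembled and displayed.

        import numpy as np
        import matplotlib.pyplot as plt

        # One row per probe position along the weld, one column per time-of-flight sample.
        n_positions, n_samples = 120, 512
        rng = np.random.default_rng(1)
        a_scans = rng.normal(0.0, 0.05, (n_positions, n_samples))
        a_scans[:, 200:210] += 0.8        # persistent echo from root/crown geometry
        a_scans[40:60, 320:330] += 1.0    # localized indication from a simulated defect

        # Time-displacement picture: rectified amplitude shown as a gray scale.
        plt.imshow(np.abs(a_scans), cmap="gray", aspect="auto")
        plt.xlabel("time of flight (samples)")
        plt.ylabel("probe displacement along weld (positions)")
        plt.show()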

  15. Picture-Word Differences in Discrimination Learning: II. Effects of Conceptual Categories.

    Science.gov (United States)

    Bourne, Lyle E., Jr.; And Others

    A well established finding in the discrimination learning literature is that pictures are learned more rapidly than their associated verbal labels. It was hypothesized in this study that the usual superiority of pictures over words in a discrimination list containing same-instance repetitions would disappear in a discrimination list containing…

  16. Influence of Suboptimally and Optimally Presented Affective Pictures and Words on Consumption-Related Behavior

    Science.gov (United States)

    Winkielman, Piotr; Gogolushko, Yekaterina

    2018-01-01

    Affective stimuli can influence immediate reactions as well as spontaneous behaviors. Much evidence for such influence comes from studies of facial expressions. However, it is unclear whether these effects hold for other affective stimuli, and how the amount of stimulus processing changes the nature of the influence. This paper addresses these issues by comparing the influence on consumption behaviors of emotional pictures and valence-matched words presented at suboptimal and supraliminal durations. In Experiment 1, both suboptimal and supraliminal emotional facial expressions influenced consumption in an affect-congruent, assimilative way. In Experiment 2, pictures of both high- and low-frequency emotional objects congruently influenced consumption. In comparison, words tended to produce incongruent effects. We discuss these findings in light of privileged access theories, which hold that pictures better convey affective meaning than words, and embodiment theories, which hold that pictures better elicit somatosensory and motor responses. PMID:29434556

  17. Words Versus Pictures: Leveraging the Research on Visual Communication

    Directory of Open Access Journals (Sweden)

    Pauline Dewan

    2015-06-01

    Full Text Available Librarians, like members of many other occupations, tend to rely on text and underutilize graphics. Research on visual communication shows that pictures have a number of advantages over words. We can interact more effectively with colleagues and patrons by incorporating ideas from this research.

  18. Event-related brain responses to emotional words, pictures, and faces – a cross-domain comparison

    Science.gov (United States)

    Bayer, Mareike; Schacht, Annekathrin

    2014-01-01

    Emotion effects in event-related brain potentials (ERPs) have previously been reported for a range of visual stimuli, including emotional words, pictures, and facial expressions. Still, little is known about the actual comparability of emotion effects across these stimulus classes. The present study aimed to fill this gap by investigating emotion effects in response to words, pictures, and facial expressions using a blocked within-subject design. Furthermore, ratings of stimulus arousal and valence were collected from an independent sample of participants. Modulations of early posterior negativity (EPN) and late positive complex (LPC) were visible for all stimulus domains, but showed clear differences, particularly in valence processing. While emotion effects were limited to positive stimuli for words, they were predominant for negative stimuli in pictures and facial expressions. These findings corroborate the notion of a positivity offset for words and a negativity bias for pictures and facial expressions, which was assumed to be caused by generally lower arousal levels of written language. Interestingly, however, these assumed differences were not confirmed by arousal ratings. Instead, words were rated as overall more positive than pictures and facial expressions. Taken together, the present results point toward systematic differences in the processing of written words and pictorial stimuli of emotional content, not only in terms of a valence bias evident in ERPs, but also concerning their emotional evaluation captured by ratings of stimulus valence and arousal. PMID:25339927

  19. Selective activation around the left occipito-temporal sulcus for words relative to pictures: individual variability or false positives?

    Science.gov (United States)

    Wright, Nicholas D; Mechelli, Andrea; Noppeney, Uta; Veltman, Dick J; Rombouts, Serge A R B; Glensman, Janice; Haynes, John-Dylan; Price, Cathy J

    2008-08-01

    We used high-resolution fMRI to investigate claims that learning to read results in greater left occipito-temporal (OT) activation for written words relative to pictures of objects. In the first experiment, 9/16 subjects performing a one-back task showed activation in ≥1 left OT voxel for words relative to pictures (P < 0.05 uncorrected). In a second experiment, another 9/15 subjects performing a semantic decision task activated ≥1 left OT voxel for words relative to pictures. However, at this low statistical threshold false positives need to be excluded. The semantic decision paradigm was therefore repeated, within subject, in two different scanners (1.5 and 3 T). Both scanners consistently localised left OT activation for words relative to fixation and pictures relative to words, but there were no consistent effects for words relative to pictures. Finally, in a third experiment, we minimised the voxel size (1.5 × 1.5 × 1.5 mm³) and demonstrated a striking concordance between the voxels activated for words and pictures, irrespective of task (naming vs. one-back) or script (English vs. Hebrew). In summary, although we detected differential activation for words relative to pictures, these effects: (i) do not withstand statistical rigour; (ii) do not replicate within or between subjects; and (iii) are observed in voxels that also respond to pictures of objects. Our findings have implications for the role of left OT activation during reading. More generally, they show that studies using low statistical thresholds in single subject analyses should correct the statistical threshold for the number of comparisons made or replicate effects within subject. (c) 2007 Wiley-Liss, Inc.

  20. Older adults' memory for the color of pictures and words.

    Science.gov (United States)

    Park, D C; Puglisi, J T

    1985-03-01

    Young and older adults were presented line drawings or matched words for study that were colored either red, green, yellow, or blue. Half of the research participants were instructed to remember each item and its color (intentional condition), whereas the other half studied only the item (incidental condition). Participants then indicated whether they recognized each item and, for items they recognized, which color they believed it had been, regardless of their initial encoding instructions. Data analyses yielded evidence for a decline in color memory in old compared with young adults, particularly with respect to pictures. The color of pictures was generally better remembered than the color of words, particularly in the incidental memory conditions. The discussion suggests that the effort required to remember color varies as a function of the stimulus with which it is associated.

  1. When does picture naming take longer than word reading?

    Directory of Open Access Journals (Sweden)

    Andrea eValente

    2016-01-01

    Full Text Available Differences between the cognitive processes involved in word reading and picture naming are well established (e.g. visual or lexico-semantic stages). Still, it is commonly thought that retrieval of phonological forms is shared across tasks. We report a test of this second hypothesis based on the time course of electroencephalographic (EEG) neural activity, reasoning that similar EEG patterns might index similar processing stages. Seventeen participants named objects and read aloud the corresponding words while their behavior and EEG activity were recorded. The latter was analyzed from stimulus onset onwards (stimulus-locked analysis) and from response onset backwards (response-locked analysis), using non-parametric statistics and the spatio-temporal segmentation of ERPs. Behavioral results confirmed that reading entails shorter latencies than naming. The analysis of EEG activity within the stimulus-to-response period allowed distinguishing three phases, broadly successive. Early on, we observed identical distribution of electric field potentials (i.e. topographies), albeit with large amplitude divergences between tasks. Then, we observed sustained cross-task differences in topographies accompanied by extended amplitude differences. Finally, the two tasks again revealed the same topographies, with significant cross-task delays in their onsets and offsets, and still significant amplitude differences. In the response-locked ERPs, the common topography displayed an offset closer to response articulation in word reading compared with picture naming; that is, the transition between the offset of this shared map and the onset of articulation was significantly faster in word reading. The results suggest that the degree of cross-task similarity varies across time. The first phase suggests similar visual processes of variable intensity and time course across tasks, while the second phase suggests marked differences. Finally, similarities and differences within the

  2. Functional anatomic studies of memory retrieval for auditory words and visual pictures.

    Science.gov (United States)

    Buckner, R L; Raichle, M E; Miezin, F M; Petersen, S E

    1996-10-01

    Functional neuroimaging with positron emission tomography was used to study brain areas activated during memory retrieval. Subjects (n = 15) recalled items from a recent study episode (episodic memory) during two paired-associate recall tasks. The tasks differed in that PICTURE RECALL required pictorial retrieval, whereas AUDITORY WORD RECALL required word retrieval. Word REPETITION and REST served as two reference tasks. Comparing recall with repetition revealed the following observations. (1) Right anterior prefrontal activation (similar to that seen in several previous experiments), in addition to bilateral frontal-opercular and anterior cingulate activations. (2) An anterior subdivision of medial frontal cortex [pre-supplementary motor area (SMA)] was activated, which could be dissociated from a more posterior area (SMA proper). (3) Parietal areas were activated, including a posterior medial area near precuneus, that could be dissociated from an anterior parietal area that was deactivated. (4) Multiple medial and lateral cerebellar areas were activated. Comparing recall with rest revealed similar activations, except right prefrontal activation was minimal and activations related to motor and auditory demands became apparent (e.g., bilateral motor and temporal cortex). Directly comparing picture recall with auditory word recall revealed few notable activations. Taken together, these findings suggest a pathway that is commonly used during the episodic retrieval of picture and word stimuli under these conditions. Many areas in this pathway overlap with areas previously activated by a different set of retrieval tasks using stem-cued recall, demonstrating their generality. Examination of activations within individual subjects in relation to structural magnetic resonance images provided anatomic information about the location of these activations. Such data, when combined with the dissociations between functional areas, provide an increasingly detailed picture of

  3. Picture-Word Differences in Discrimination Learning: II. Effects of Conceptual Categories

    Science.gov (United States)

    Bourne, Lyle E.; And Others

    1976-01-01

    Investigates the prediction that the usual superiority of pictures over words for repetitions of the same items would disappear for items that were different instances of repeated categories. (Author/RK)

  4. Promoting Picture Word Inductive Model (PWIM) to Develop Students’ Writing Skill

    Directory of Open Access Journals (Sweden)

    Fitri Novia

    2015-04-01

    Full Text Available Abstract: The objective of this study was to find out whether or not there was a significant difference between students who were taught using the picture word inductive model (PWIM) and those who were not. The experimental method was used to conduct the study. The population of this study was the eighth-grade students of SMP N 1 Sirah Pulau Padang. Out of this population, 68 students were taken as a sample and divided equally into two groups by using the purposive sampling method. Class VIII 1 was the experimental group and class VIII 3 the control group, each consisting of 34 students. The data were collected by asking students to write a descriptive paragraph. Content validity was used to establish validity, and inter-rater reliability was used to establish reliability. A t-test was used to analyze the data. Based on the result, the value of t-obtained was 3.155, at the significance level p < 0.05 in two-tailed testing with df = 66; the critical value of t-table was 1.9966. Since the value of t-obtained was higher than t-table, the null hypothesis (Ho) was rejected and the alternative hypothesis (Ha) was accepted. This means that there was a significant difference between students who were taught using the picture word inductive model (PWIM) and those who were not. In conclusion, PWIM could help students to develop their writing skill.   Key Words: Writing skill, Picture Word Inductive Model (PWIM).
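
    The comparison reported above (t-obtained = 3.155 against a critical value of 1.9966 with df = 66, two-tailed, p < 0.05) corresponds to an independent-samples t-test. A minimal sketch with hypothetical writing scores, not the study's data:

        from scipy import stats

        # Hypothetical writing-test scores for the two groups (illustration only).
        experimental = [78, 82, 75, 80, 85, 77, 79, 83, 81, 76]
        control = [70, 72, 68, 74, 71, 69, 73, 75, 67, 70]

        t_obtained, p_value = stats.ttest_ind(experimental, control)   # two-tailed by default
        df = len(experimental) + len(control) - 2
        print(f"t({df}) = {t_obtained:.3f}, p = {p_value:.4f}")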

  5. γ-oscillations modulated by picture naming and word reading: intracranial recording in epileptic patients.

    Science.gov (United States)

    Wu, Helen C; Nagasawa, Tetsuro; Brown, Erik C; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi

    2011-10-01

    We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Both tasks commonly elicited gamma-augmentation (maximally at 80-100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated to the degree of gamma-augmentation in the medial occipital areas. Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. The present study increases our understanding of the visual-language pathways. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
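
    Gamma-augmentation of the kind reported above is commonly quantified as the band-limited power envelope expressed relative to a pre-stimulus baseline. The sketch below (simulated trace and a generic band-pass/Hilbert approach, not the authors' pipeline) illustrates the idea for an 80-100 Hz band.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def gamma_augmentation(trace, fs, band=(80.0, 100.0), baseline_s=0.5):
            """Percent change of 80-100 Hz power relative to a pre-stimulus baseline."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            power = np.abs(hilbert(filtfilt(b, a, trace))) ** 2
            baseline = power[:int(baseline_s * fs)].mean()
            return 100.0 * (power - baseline) / baseline

        fs = 1000.0
        t = np.arange(0.0, 2.0, 1.0 / fs)                                # 0.5 s baseline, then task period
        trace = np.random.default_rng(2).normal(0.0, 1.0, t.size)
        trace[t > 0.5] += 0.8 * np.sin(2 * np.pi * 90.0 * t[t > 0.5])    # injected 90 Hz activity
        print(gamma_augmentation(trace, fs)[int(0.6 * fs):].mean())      # mean augmentation after onset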

  6. The impact of left and right intracranial tumors on picture and word recognition memory.

    Science.gov (United States)

    Goldstein, Bram; Armstrong, Carol L; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V

    2004-02-01

    This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH patient group obtained a significantly slower mean picture recognition reaction time than the RH group. The LH group had a higher proportion of tumors extending into the temporal lobes, possibly accounting for their greater pictorial processing impairments. Dual coding and enhanced visual imagery may have contributed to the patient groups' similar performance on the remainder of the measures.

  7. The Free and Cued Selective Reminding Test: Pictures vs. Words

    OpenAIRE

    Pettit, Annabel

    2013-01-01

    The present study tested a group of young (18-25) and old (>60) healthy adults to examine whether a pictorial superiority effect influences performance in the free and cued selective reminding test (FCSRT). 81 participants were recruited and performed the ACE-R, TOPF and FCSRT. Stimulus items for the FCSRT consisted of either 16 line drawings (in the picture form) or 16 written words (in the word form). The design was completely between-subjects and the form of test was fully counterbalanced...

  8. Strategies in Reading Comprehension: Individual Differences in Learning from Pictures and Words (A Footnote). Technical Report No. 300.

    Science.gov (United States)

    Levin, Joel R.; Guttmann, Joseph

    In a recent experiment it was discovered that although many children learn uniformly well (or poorly) from pictures and words, others learn appreciably better from pictures. The present study rules out an alternative explanation of those data--which had been produced on a single learning task containing both pictures and words--by obtaining…

  9. Picturing words? Sensorimotor cortex activation for printed words in child and adult readers

    Science.gov (United States)

    Dekker, Tessa M.; Mareschal, Denis; Johnson, Mark H.; Sereno, Martin I.

    2014-01-01

    Learning to read involves associating abstract visual shapes with familiar meanings. Embodiment theories suggest that word meaning is at least partially represented in distributed sensorimotor networks in the brain (Barsalou, 2008; Pulvermueller, 2013). We explored how reading comprehension develops by tracking when and how printed words start activating these “semantic” sensorimotor representations as children learn to read. Adults and children aged 7–10 years showed clear category-specific cortical specialization for tool versus animal pictures during a one-back categorisation task. Thus, sensorimotor representations for these categories were in place at all ages. However, co-activation of these same brain regions by the visual objects’ written names was only present in adults, even though all children could read and comprehend all presented words, showed adult-like task performance, and older children were proficient readers. It thus takes years of training and expert reading skill before spontaneous processing of printed words’ sensorimotor meanings develops in childhood. PMID:25463817

  10. Selective Activation Around the Left Occipito-Temporal Sulcus for Words Relative to Pictures: Individual Variability or False Positives?

    OpenAIRE

    Wright, Nicholas D; Mechelli, Andrea; Noppeney, Uta; Veltman, Dick J; Rombouts, Serge ARB; Glensman, Janice; Haynes, John-Dylan; Price, Cathy J

    2007-01-01

    We used high-resolution fMRI to investigate claims that learning to read results in greater left occipito-temporal (OT) activation for written words relative to pictures of objects. In the first experiment, 9/16 subjects performing a one-back task showed activation in ≥1 left OT voxel for words relative to pictures (P < 0.05 uncorrected). In a second experiment, another 9/15 subjects performing a semantic decision task activated ≥1 left OT voxel for words relative to pictures. However, at thi...

  11. Sentence Context and Word-Picture Cued-Recall Paired-Associate Learning Procedure Boosts Recall in Normal and Mild Alzheimer's Disease Patients.

    Science.gov (United States)

    Iodice, Rosario; Meilán, Juan José García; Ramos, Juan Carro; Small, Jeff A

    2018-01-01

    The aim of this study was to employ the word-picture paradigm to examine the effectiveness of combined pictorial illustrations and sentences as strong contextual cues. The experiment details the performance of word recall in healthy older adults (HOA) and people with mild Alzheimer's disease (AD). The researchers enhanced recall of the words with the word-picture condition and when the pair was associated with a sentence contextualizing the two items. The sample was composed of 18 HOA and 18 people with mild AD. Participants memorized 15 pairs of words under word-word and word-picture conditions, with and without a sentence context. In the paired-associate test, the first item of the pair was read aloud by participants and used to elicit retrieval of the associated item. The findings suggest that, for both HOA and mild-AD participants, pictures improved item recall compared to the word condition, as did sentences, which further enabled item recall. Additionally, the HOA group performed better than the mild-AD group in all conditions. Word-picture and sentence context strengthen encoding in the explicit memory task, both in HOA and mild AD. These results open a potential window to improve memory for verbalized instructions and restore sequential abilities in everyday life, such as brushing one's teeth, fastening one's pants, or drying one's hands.

  12. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language

    OpenAIRE

    Claudia Repetto; Elisa Pedroli; Manuela Macedonia; Manuela Macedonia

    2017-01-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word’s meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of t...

  13. Emotional Picture and Word Processing: An fMRI Study on Effects of Stimulus Complexity

    Science.gov (United States)

    Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.

    2013-01-01

    Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009

  14. Depression reduces perceptual sensitivity for positive words and pictures.

    Science.gov (United States)

    Atchley, Ruth Ann; Ilardi, Stephen S; Young, Keith M; Stroupe, Natalie N; O'Hare, Aminda J; Bistricky, Steven L; Collison, Elizabeth; Gibson, Linzi; Schuster, Jonathan; Lepping, Rebecca J

    2012-01-01

    There is evidence of maladaptive attentional biases for lexical information (e.g., Atchley, Ilardi, & Enloe, 2003; Atchley, Stringer, Mathias, Ilardi, & Minatrea, 2007) and for pictographic stimuli (e.g., Gotlib, Krasnoperova, Yue, & Joormann, 2004) among patients with depression. The current research looks for depressotypic processing biases among depressed out-patients and non-clinical controls, using both verbal and pictorial stimuli. A d' measure (sensitivity index) was used to examine each participant's perceptual sensitivity threshold. Never-depressed controls evidenced a detection bias for positive picture stimuli, while depressed participants had no such bias. With verbal stimuli, depressed individuals showed specific decrements in the detection of positive person-referent words (WINNER), but not with positive non-person-referent words (SUNSHINE) or with negative words. Never-depressed participants showed no such differences across word types. In the current study, depression is characterised both by an absence of the normal positivistic biases seen in individuals without mood disorders (consistent with McCabe & Gotlib, 1995), and by a specific reduction in sensitivity for person-referent positive information that might be inconsistent with depressotypic self-schemas.
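
    The d' sensitivity index used above is the difference between the z-transformed hit and false-alarm rates. A minimal sketch with hypothetical counts, including a simple correction so extreme rates stay finite:

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)          # +0.5/+1 keeps rates off 0 and 1
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical detection counts for positive person-referent words (illustration only).
        print(round(d_prime(hits=38, misses=12, false_alarms=8, correct_rejections=42), 2))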

  15. Preliminary application in teaching of medical imaging with picture archiving and communication systems

    International Nuclear Information System (INIS)

    Wei Yuqing; Hu Jian; Wang Xuejian; Cao Jun; Tong Juan; Shen Guiquan; Luo Min; Luo Song

    2003-01-01

    Objective: To evaluate PACS (picture archiving and communication systems) in the teaching of medical imaging. Methods: A large-screen multimedia reading room and an electronic study room were built with GE PACS, Angel RIS (radiology information system), and end-terminal picture-word workstations. Pictures and words from PACS were downloaded directly for teaching and for the teaching image bank and test image bank. Results: The large-screen multimedia reading room, classroom, and electronic study room were built successfully. Valuable information from nearly 5000 patients was accumulated in the teaching image bank of PACS. The classic medical imaging teaching mode was changed, real-time and multi-mode teaching were realized, and the teaching effect was greatly improved. The PACS-based teaching model was well received by students. Conclusion: PACS is very useful for improving the teaching quality of medical imaging and is worth popularizing.

  16. Memorial familiarity remains intact for pictures but not for words in patients with amnestic mild cognitive impairment.

    Science.gov (United States)

    Embree, Lindsay M; Budson, Andrew E; Ally, Brandon A

    2012-07-01

    Understanding how memory breaks down in the earliest stages of the Alzheimer's disease (AD) process has significant implications, both clinically and with respect to intervention development. Previous work has highlighted a robust picture superiority effect in patients with amnestic mild cognitive impairment (aMCI). However, it remains unclear as to how pictures improve memory compared to words in this patient population. In the current study, we utilized receiver operating characteristic (ROC) curves to obtain estimates of familiarity and recollection for pictures and words in patients with aMCI and healthy older controls. Analysis of accuracy shows that even when performance is matched between pictures and words in the healthy control group, patients with aMCI continue to show a significant picture superiority effect. The results of the ROC analysis showed that patients demonstrated significantly impaired recollection and familiarity for words compared with controls. In contrast, patients with aMCI demonstrated impaired recollection, but intact familiarity for pictures, compared to controls. Based on previous work from our lab, we speculate that patients can utilize the rich conceptual information provided by pictures to enhance familiarity, and perceptual information may allow for post-retrieval monitoring or verification of the enhanced sense of familiarity. Alternatively, the combination of enhanced conceptual and perceptual fluency of the test item might drive a stronger or more robust sense of familiarity that can be accurately attributed to a studied item. Copyright © 2012 Elsevier Ltd. All rights reserved.
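
    One common way to derive recollection and familiarity estimates from confidence-based ROCs, in the spirit of the analysis above, is Yonelinas's dual-process signal-detection model, in which each ROC point satisfies hits = R + (1 - R) * Phi(d' + z(FA)). The fit below uses invented ROC points purely for illustration; the study's actual fitting procedure may differ.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import curve_fit

        def dpsd_hits(fa_rate, recollection, d_prime):
            """Dual-process prediction: hits = R + (1 - R) * Phi(d' + z(FA))."""
            return recollection + (1 - recollection) * norm.cdf(d_prime + norm.ppf(fa_rate))

        # Hypothetical cumulative ROC points from a 6-point confidence scale.
        fa = np.array([0.02, 0.06, 0.12, 0.22, 0.38, 0.60])
        hits = np.array([0.45, 0.58, 0.68, 0.77, 0.85, 0.92])
        (R, d), _ = curve_fit(dpsd_hits, fa, hits, p0=[0.3, 1.0], bounds=([0, 0], [1, 5]))
        print(f"recollection R = {R:.2f}, familiarity d' = {d:.2f}")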

  17. Learning disability subtypes and the role of attention during the naming of pictures and words: an event-related potential analysis.

    Science.gov (United States)

    Greenham, Stephanie L; Stelmack, Robert M; van der Vlugt, Harry

    2003-01-01

    The role of attention in the processing of pictures and words was investigated for a group of normally achieving children and for groups of learning disability sub-types that were defined by deficient performance on tests of reading and spelling (Group RS) and of arithmetic (Group A). An event-related potential (ERP) recording paradigm was employed in which the children were required to attend to and name either pictures or words that were presented individually or in superimposed picture-word arrays that varied in degree of semantic relation. For Group RS, the ERP waves to words, both presented individually or attended in the superimposed array, exhibited reduced N450 amplitude relative to controls, whereas their ERP waves to pictures were normal. This suggests that the word-naming deficiency for Group RS is not a selective attention deficit but rather a specific linguistic deficit that develops at a later stage of processing. In contrast to Group RS and controls, Group A did not exhibit reliable early frontal negative waves (N280) to the super-imposed pictures and words, an effect that may reflect a selective attention deficit for these children that develops at an early stage of visuo-spatial processing. These early processing differences were also evident in smaller amplitude N450 waves for Group A when naming either pictures or words in the superimposed arrays.

  18. The role of verbal and pictorial information in multimodal incidental acquisition of foreign language vocabulary

    OpenAIRE

    Bisson, Marie-Josée; Van Heuven, Walter J.B.; Conklin, Kathy; Tunney, Richard J.

    2014-01-01

    This study used eye tracking to investigate the allocation of attention to multimodal stimuli during an incidental learning situation, as well as its impact on subsequent explicit learning. Participants were exposed to foreign language (FL) auditory words on their own, in conjunction with written native language (NL) translations, or with both written NL translations and pictures. Incidental acquisition of FL words was assessed the following day through an explicit learning task where partici...

  19. Sentence Context and Word-Picture Cued-Recall Paired-Associate Learning Procedure Boosts Recall in Normal and Mild Alzheimer’s Disease Patients

    Directory of Open Access Journals (Sweden)

    Rosario Iodice

    2018-01-01

    Full Text Available Introduction. The aim of this study was to employ the word-picture paradigm to examine the effectiveness of combined pictorial illustrations and sentences as strong contextual cues. The experiment details the performance of word recall in healthy older adults (HOA) and people with mild Alzheimer’s disease (AD). The researchers enhanced recall of the words with the word-picture condition and when the pair was associated with a sentence contextualizing the two items. Method. The sample was composed of 18 HOA and 18 people with mild AD. Participants memorized 15 pairs of words under word-word and word-picture conditions, with and without a sentence context. In the paired-associate test, the first item of the pair was read aloud by participants and used to elicit retrieval of the associated item. Results. The findings suggest that, for both HOA and mild-AD participants, pictures improved item recall compared to the word condition, as did sentences, which further enabled item recall. Additionally, the HOA group performed better than the mild-AD group in all conditions. Conclusions. Word-picture and sentence context strengthen encoding in the explicit memory task, both in HOA and mild AD. These results open a potential window to improve memory for verbalized instructions and restore sequential abilities in everyday life, such as brushing one’s teeth, fastening one’s pants, or drying one’s hands.

  20. The Multimodal Possibilities of Online Instructions

    DEFF Research Database (Denmark)

    Kampf, Constance

    2006-01-01

    The WWW simplifies the process of delivering online instructions through multimodal channels because of the ease of use for voice, video, pictures, and text modes of communication built into it. Given that instructions are being produced in multimodal format for the WWW, how do multi-modal analy...

  1. An analysis of initial acquisition and maintenance of sight words following picture matching and copy cover, and compare teaching methods.

    Science.gov (United States)

    Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E

    2004-01-01

    This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition when compared to the picture-matching intervention. Benefits to students and the practicality of employing the word-level teaching methods are discussed.

  2. Stroop and picture-word interference are two sides of the same coin

    NARCIS (Netherlands)

    van Maanen, Leendert; van Rijn, Hedderik; Borst, Jelmer P.

    2009-01-01

    This article presents a cognitive model that reconciles a surprising observation in the picture-word interference (PWI) paradigm with the general notion that PWI is a form of Stroop interference. Dell'Acqua, Job, Peressotti, and Pascali (2007) assessed PWI using a psychological refractory period

  3. Why does picture naming take longer than word reading? The contribution of articulatory processes.

    Science.gov (United States)

    Riès, Stéphanie; Legou, Thierry; Burle, Borís; Alario, F-Xavier; Malfait, Nicole

    2012-10-01

    Since the 19th century, it has been known that response latencies are longer for naming pictures than for reading words aloud. While several interpretations have been proposed, a common general assumption is that this difference stems from cognitive word-selection processes and not from articulatory processes. Here we show that, contrary to this widely accepted view, articulatory processes are also affected by the task performed. To demonstrate this, we used a procedure that to our knowledge had never been used in research on language processing: response-latency fractionating. Along with vocal onsets, we recorded the electromyographic (EMG) activity of facial muscles while participants named pictures or read words aloud. On the basis of these measures, we were able to fractionate the verbal response latencies into two types of time intervals: premotor times (from stimulus presentation to EMG onset), mostly reflecting cognitive processes, and motor times (from EMG onset to vocal onset), related to motor execution processes. We showed that premotor and motor times are both longer in picture naming than in reading, although articulation is already initiated in the latter measure. Future studies based on this new approach should bring valuable clues for a better understanding of the relation between the cognitive and motor processes involved in speech production.
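
    The fractionation logic described above reduces to two subtractions once an EMG onset has been estimated: premotor time = EMG onset - stimulus onset, and motor time = vocal onset - EMG onset. The sketch below uses a crude smoothed-threshold onset detector on simulated data; it is only an illustration, not the authors' detection method.

        import numpy as np

        def fractionate(emg, fs, stim_onset_s, vocal_onset_s, k=4.0, baseline_s=0.2, smooth_s=0.005):
            """Split a verbal response latency into premotor and motor times.

            EMG onset is taken as the first post-stimulus sample at which the smoothed,
            rectified EMG exceeds mean + k*SD of a pre-stimulus baseline (illustration only).
            """
            win = max(int(smooth_s * fs), 1)
            envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
            stim_i = int(stim_onset_s * fs)
            baseline = envelope[stim_i - int(baseline_s * fs):stim_i]
            threshold = baseline.mean() + k * baseline.std()
            emg_onset_s = stim_onset_s + np.argmax(envelope[stim_i:] > threshold) / fs
            return emg_onset_s - stim_onset_s, vocal_onset_s - emg_onset_s   # premotor, motor

        fs = 2000.0
        t = np.arange(0.0, 2.0, 1.0 / fs)
        emg = 0.01 * np.random.default_rng(3).standard_normal(t.size)        # baseline noise
        emg[t >= 1.28] += 0.5 * np.sin(2 * np.pi * 150.0 * t[t >= 1.28])     # simulated muscle burst
        premotor, motor = fractionate(emg, fs, stim_onset_s=1.0, vocal_onset_s=1.43)
        print(f"premotor = {premotor * 1000:.0f} ms, motor = {motor * 1000:.0f} ms")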

  4. Imaging When Acting: Picture but Not Word Cues Induce Action-Related Biases of Visual Attention

    Science.gov (United States)

    Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna

    2012-01-01

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters. PMID:23087656

  5. Imaging when acting: picture but not word cues induce action-related biases of visual attention.

    Science.gov (United States)

    Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna

    2012-01-01

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing - an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.

  6. Objective age of acquisition for 223 Italian words: norms and effects on picture naming speed.

    Science.gov (United States)

    Lotto, Lorella; Surian, Luca; Job, Remo

    2010-02-01

    The present study provides a set of objective age of acquisition (AoA) norms for 223 Italian words that may be useful for conducting cross-linguistic studies or experiments on Italian language processing. The data were collected by presenting children from the ages of 2 to 11 with a normed picture set (Lotto, Dell'Acqua, & Job, 2001). Following the study of Morrison, Chappell, and Ellis (1997), we report two measures of objective AoA. Both measures strongly correlated with each other, and they also showed a good correlation with the rated AoA provided by adult participants. Furthermore, we assessed the relationship between the AoA measures and other variables used in psycholinguistic experiments. Regression analyses showed that familiarity, typicality, and word frequency were significant predictors of AoA. AoA, but not word frequency, was found to determine naming latencies. Finally, we present a path model in which AoA is a mediator in predicting speed in picture naming. The norms and the picture set can also be downloaded from http://dpss.psy.unipd.it/files/strumenti.php and from http://brm.psychonomic-journals.org/content/supplemental.
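
    With the norms arranged as one row per item, the regression and path analyses summarized above can be approximated by ordinary least-squares models. The sketch below uses statsmodels with invented values and column names (not the published norms) simply to show the structure of the two analyses.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical item-level norms; the column names and values are ours, not the authors'.
        norms = pd.DataFrame({
            "aoa":         [2.1, 3.4, 5.0, 4.2, 6.3, 3.0, 5.6, 2.8],
            "familiarity": [6.5, 5.8, 4.1, 4.9, 3.2, 6.0, 3.8, 6.2],
            "typicality":  [5.9, 5.1, 4.4, 4.8, 3.6, 5.5, 4.0, 5.7],
            "log_freq":    [2.3, 1.9, 1.1, 1.4, 0.7, 2.0, 0.9, 2.2],
            "naming_rt":   [620, 655, 730, 700, 790, 640, 760, 630],
        })

        # Which item variables predict objective AoA?
        aoa_model = smf.ols("aoa ~ familiarity + typicality + log_freq", data=norms).fit()
        # Does AoA, rather than frequency, predict picture-naming latency?
        rt_model = smf.ols("naming_rt ~ aoa + log_freq", data=norms).fit()
        print(aoa_model.params, rt_model.params, sep="\n")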

  7. Reading Words or Pictures: Eye Movement Patterns in Adults and Children Differ by Age Group and Receptive Language Ability.

    Science.gov (United States)

    An, Licong; Wang, Yifang; Sun, Yadong

    2017-01-01

    This study was conducted to explore the differences in the degree of attention given to Chinese print and pictures by children and adults when they read picture books with and without Chinese words. We used an eye tracker from SensoMotoric Instruments to record the visual fixations of the subjects. The results showed that the adults paid more attention to Chinese print and looked at the print sooner than the children did. The stronger the children's receptive language abilities were, the less time it took them to view the pictures. All participants spent the same amount of time looking at the pictures whether Chinese words were present or absent.

  8. Reading Words or Pictures: Eye Movement Patterns in Adults and Children Differ by Age Group and Receptive Language Ability

    Directory of Open Access Journals (Sweden)

    Licong An

    2017-05-01

    Full Text Available This study was conducted to explore the differences in the degree of attention given to Chinese print and pictures by children and adults when they read picture books with and without Chinese words. We used an eye tracker from SensoMotoric Instruments to record the visual fixations of the subjects. The results showed that the adults paid more attention to Chinese print and looked at the print sooner than the children did. The stronger the children’s receptive language abilities were, the less time it took them to view the pictures. All participants spent the same amount of time looking at the pictures whether Chinese words were present or absent.

  9. Could a multimodal dictionary serve as a learning tool? An examination of the impact of technologically enhanced visual glosses on L2 text comprehension

    Directory of Open Access Journals (Sweden)

    Takeshi Sato

    2016-09-01

    Full Text Available This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it facilitates incidental word retention. This study explores other potentials of multimodal L2 vocabulary learning: explicit learning with a multimodal dictionary could enhance not only word retention, but also text comprehension; the dictionary could serve not only as a reference tool, but also as a learning tool; and technology-enhanced visual glosses could facilitate deeper text comprehension. To verify these claims, this study investigates the multimodal representations’ effects on Japanese students learning L2 locative prepositions by developing two online dictionaries, one with static pictures and one with animations. The findings show the advantage of such dictionaries in explicit learning; however, no significant differences are found between the two types of visual glosses, either in the vocabulary or in the listening tests. This study confirms the effectiveness of multimodal L2 materials, but also emphasizes the need for further research into making the technologically enhanced materials more effective.

  10. The Role of Repeated Exposure to Multimodal Input in Incidental Acquisition of Foreign Language Vocabulary.

    Science.gov (United States)

    Bisson, Marie-Josée; van Heuven, Walter J B; Conklin, Kathy; Tunney, Richard J

    2014-12-01

    Prior research has reported incidental vocabulary acquisition with complete beginners in a foreign language (FL), within 8 exposures to auditory and written FL word forms presented with a picture depicting their meaning. However, important questions remain about whether acquisition occurs with fewer exposures to FL words in a multimodal situation and whether there is a repeated exposure effect. Here we report a study where the number of exposures to FL words in an incidental learning phase varied between 2, 4, 6, and 8 exposures. Following the incidental learning phase, participants completed an explicit learning task where they learned to recognize written translation equivalents of auditory FL word forms, half of which had occurred in the incidental learning phase. The results showed that participants performed better on the words they had previously been exposed to, and that this incidental learning effect occurred from as little as 2 exposures to the multimodal stimuli. In addition, repeated exposure to the stimuli was found to have a larger impact on learning during the first few exposures and decrease thereafter, suggesting that the effects of repeated exposure on vocabulary acquisition are not necessarily constant.

  11. Short-Term Free Recall and Sequential Memory for Pictures and Words: A Simultaneous-Successive Processing Interpretation.

    Science.gov (United States)

    Randhawa, Bikkar S.; And Others

    1982-01-01

    Replications of two basic experiments in support of the dual-coding processing model with grade 10 and college subjects used pictures, concrete words, and abstract words as stimuli presented at fast and slow rates for immediate and sequential recall. Results seem to be consistent with predictions of simultaneous-successive cognitive theory. (MBR)

  12. Evidence for similar patterns of neural activity elicited by picture- and word-based representations of natural scenes.

    Science.gov (United States)

    Kumar, Manoj; Federmeier, Kara D; Fei-Fei, Li; Beck, Diane M

    2017-07-15

    A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate that there is a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: they have either only examined disparate categories (e.g. animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflects only large scale differences between the categories or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects, but also, because natural scene categories activate a common set of brain regions, we identify a more fine-grained (i.e. higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four different categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g. "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g. phrase stimuli) would transfer (i.e. cross-decode) to the other stimulus type (e.g. picture stimuli). The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and
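
    The cross-decoding logic described above can be sketched with a standard linear classifier: train on the voxel patterns evoked by one stimulus type and test on the other, scoring against chance (0.25 for four scene categories). The data below are simulated and the pipeline is a generic scikit-learn one; it is not the study's analysis code.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(5)
        n_blocks_per_cat, n_voxels = 20, 300
        labels = np.repeat(np.arange(4), n_blocks_per_cat)   # beach, city, highway, mountain

        def simulate_patterns(offset):
            """Hypothetical voxel patterns: weak category-specific signal plus noise."""
            signal = np.zeros((labels.size, n_voxels))
            for c in range(4):
                signal[labels == c, c * 50:(c + 1) * 50] = 0.5 + offset
            return signal + rng.normal(0.0, 1.0, signal.shape)

        phrase_patterns, picture_patterns = simulate_patterns(0.1), simulate_patterns(0.0)

        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(phrase_patterns, labels)                       # train on phrase blocks
        print("cross-decoding accuracy:", clf.score(picture_patterns, labels))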

  13. Sentence Context and Word-Picture Cued-Recall Paired-Associate Learning Procedure Boosts Recall in Normal and Mild Alzheimer’s Disease Patients

    OpenAIRE

    Iodice, Rosario; Meilán, Juan José García; Ramos, Juan Carro; Small, Jeff A.

    2018-01-01

    Introduction. The aim of this study was to employ the word-picture paradigm to examine the effectiveness of combined pictorial illustrations and sentences as strong contextual cues. The experiment examined word-recall performance in healthy older adults (HOA) and patients with mild Alzheimer’s disease (AD). Word recall was enhanced in the word-picture condition and when the pair was associated with a sentence contextualizing the two items. Method. The sample was composed of 18 HOA and...

  14. Is a Picture Worth a Thousand Words? Using Images to Create a Concreteness Effect for Abstract Words: Evidence from Beginning L2 Learners of Spanish

    Science.gov (United States)

    Farley, Andrew; Pahom, Olga; Ramonda, Kris

    2014-01-01

    This study examines the lexical representation and recall of abstract words by beginning L2 learners of Spanish in the light of the predictions of the dual coding theory (Paivio 1971; Paivio and Desrochers 1980). Ninety-seven learners (forty-four males and fifty-three females) were randomly placed in the picture or non-picture group and taught…

  15. Pictures with narration versus pictures with on-screen text during teaching Mathematics

    Directory of Open Access Journals (Sweden)

    Panagiotis Ioannou

    2017-06-01

    The purpose of the present study was to compare the effects of two different teaching methods on students’ comprehension in Mathematics: pictures with concurrent narration versus pictures with on-screen text, during the teaching of triangles, a lesson in Mathematics. Forty primary school children (boys and girls) were selected to participate in this study. Students were split into two experimental groups using simple random sampling. The first group viewed and listened to a presentation of triangles (pictures with narration group), while the second group viewed the presentation with on-screen text (pictures with on-screen text group). A recall test was used to evaluate students’ comprehension. The results showed that students’ comprehension was better when the triangle presentation (pictures) was accompanied by spoken words than by printed words. The pictures with narration group performed better than the pictures with on-screen text group in the recall test (M = 4.97, SD = 1.32, p < 0.01). Results are consistent with the modality principle, according to which learners are more likely to build connections between corresponding words and pictures when words are presented in spoken form (narration) simultaneously with pictures.

  16. Short-term retention of pictures and words as a function of type of distraction and length of delay interval.

    Science.gov (United States)

    Pellegrino, J W; Siegel, A W; Dhawan, M

    1976-01-01

    Picture and word triads were tested in a Brown-Peterson short-term retention task at varying delay intervals (3, 10, or 30 sec) and under acoustic and simultaneous acoustic and visual distraction. Pictures were superior to words at all delay intervals under single acoustic distraction. Dual distraction consistently reduced picture retention while simultaneously facilitating word retention. The results were interpreted in terms of the dual coding hypothesis with modality-specific interference effects in the visual and acoustic processing systems. The differential effects of dual distraction were related to the introduction of visual interference and differential levels of functional acoustic interference across dual and single distraction tasks. The latter was supported by a constant 2/1 ratio in the backward counting rates of the acoustic vs. dual distraction tasks. The results further suggest that retention may not depend on total processing load of the distraction task, per se, but rather that processing load operates within modalities.

  17. Conceptual control across modalities: graded specialisation for pictures and words in inferior frontal and posterior temporal cortex

    OpenAIRE

    Krieger-Redwood, Katya; Teige, Catarina; Davey, James; Hymers, Mark; Jefferies, Elizabeth

    2015-01-01

    Controlled semantic retrieval to words elicits co-activation of inferior frontal (IFG) and left posterior temporal cortex (pMTG), but research has not yet established (i) the distinct contributions of these regions or (ii) whether the same processes are recruited for non-verbal stimuli. Words have relatively flexible meanings – as a consequence, identifying the context that links two specific words is relatively demanding. In contrast, pictures are richer stimuli and their precise meaning is ...

  18. Negativity is the main cause of reaction-time delay in an emotional Stroop study with picture/word stimuli.

    NARCIS (Netherlands)

    Sutmuller, A.D.; Brokken, D.

    2008-01-01

    The aim of this study was to find out if the emotion triggered by viewing a picture can be determined by measuring reaction times. We investigated this by using the emotional Stroop task. Emotional Stroop entails presenting two stimuli, in our case pictures and superimposed words, with different

  19. Text-Picture Relations in Cooking Instructions

    NARCIS (Netherlands)

    van der Sluis, Ielka; Leito, Shadira; Redeker, Gisela; Bunt, Harry

    2016-01-01

    Like many other instructions, recipes on packages with ready-to-use ingredients for a dish combine a series of pictures with short text paragraphs. The information presentation in such multimodal instructions can be compact (either text or picture) and/or cohesive (text and picture). In an

  20. The Effective Use of Symbols in Teaching Word Recognition to Children with Severe Learning Difficulties: A Comparison of Word Alone, Integrated Picture Cueing and the Handle Technique.

    Science.gov (United States)

    Sheehy, Kieron

    2002-01-01

    A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)

  1. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.
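    The ERP quantification in this record (e.g., a parietal P200 or a late positive potential) typically comes down to averaging epochs per condition and measuring mean amplitude in a time window. The NumPy sketch below illustrates that step on simulated single-channel data; the sampling rate, window boundaries, and array layout are assumptions for illustration only.

    ```python
    # Minimal illustration of ERP mean-amplitude measurement (simulated data;
    # time windows and sampling rate are assumptions, not the study's values).
    import numpy as np

    sfreq = 500                                   # Hz (assumed)
    times = np.arange(-0.2, 0.8, 1 / sfreq)       # epoch from -200 ms to 800 ms
    rng = np.random.default_rng(1)

    def simulate_epochs(n_trials, p200_amp):
        # Noise plus a Gaussian bump around 200 ms to mimic a P200 at one channel.
        noise = rng.normal(0, 2.0, size=(n_trials, times.size))
        p200 = p200_amp * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
        return noise + p200

    def mean_amplitude(epochs, tmin, tmax):
        erp = epochs.mean(axis=0)                 # average over trials -> ERP
        window = (times >= tmin) & (times <= tmax)
        return erp[window].mean()

    unpleasant_sound = simulate_epochs(40, p200_amp=4.0)
    neutral_sound = simulate_epochs(40, p200_amp=2.0)
    print("P200 mean amplitude (150-250 ms):",
          round(mean_amplitude(unpleasant_sound, 0.15, 0.25), 2), "vs",
          round(mean_amplitude(neutral_sound, 0.15, 0.25), 2))
    ```

    Condition differences in these windowed means are what statements like "increased parietal P200 for unpleasant sounds" refer to.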

  2. Naming and categorizing objects: task differences modulate the polarity of semantic effects in the picture-word interference paradigm.

    Science.gov (United States)

    Hantsch, Ansgar; Jescheniak, Jörg D; Mädebach, Andreas

    2012-07-01

    The picture-word interference paradigm is a prominent tool for studying lexical retrieval during speech production. When participants name the pictures, interference from semantically related distractor words has regularly been shown. By contrast, when participants categorize the pictures, facilitation from semantically related distractors has typically been found. In the extant studies, however, differences in the task instructions (naming vs. categorizing) were confounded with the response level: While responses in naming were typically located at the basic level (e.g., "dog"), responses were located at the superordinate level in categorization (e.g., "animal"). The present study avoided this confound by having participants respond at the basic level in both naming and categorization, using the same pictures, distractors, and verbal responses. Our findings confirm the polarity reversal of the semantic effects--that is, semantic interference in naming, and semantic facilitation in categorization. These findings show that the polarity reversal of the semantic effect is indeed due to the different tasks and is not an artifact of the different response levels used in previous studies. Implications for current models of language production are discussed.

  3. Selective activation around the left occipito-temporal sulcus for words relative to pictures: Individual variability or false positives?

    NARCIS (Netherlands)

    Wright, Nicholas D.; Mechelli, Andrea; Noppeney, Uta; Veltman, Dick J.; Rombouts, Serge A. R. B.; Glensman, Janice; Haynes, John-Dylan; Price, Cathy J.

    2008-01-01

    We used high-resolution fMRI to investigate claims that learning to read results in greater left occipito-temporal (OT) activation for written words relative to pictures of objects. In the first experiment, 9/16 subjects performing a one-back task showed activation in >= 1 left OT voxel for words

  4. Memory for pictograms, pictures, and words separately and all mixed up.

    Science.gov (United States)

    Haber, R N; Myers, B L

    1982-01-01

    Pictograms were created in which the outline of a word denoting an object was shaped to be the same as the object itself. A number of objects were presented, some drawn as pictograms, some as outline shapes, and some as normally printed words. The experiment was designed to test if recognition memory was superior for the pictograms as compared to outline pictures or words, and if this would be true whether the subjects were asked to attend to the form or only the content of the stimuli. One group of subjects was trained to respond OLD only if the test item was the same object in the same form, and NEW only to objects never before shown in any form. Recognition accuracy (a signal detection analysis) was greatest for the pictograms, and poorest for the words in both groups. Though the subjects could disregard form, they were most accurate when probed with the same form as presented. But in all comparisons subjects were most accurate when forced to recall both the form and the content. These and other results were taken to be mildly supportive of a dual coding hypothesis, and of the utility of these new stimuli.
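    The "signal detection analysis" of recognition accuracy mentioned in this record usually amounts to computing d' from hit and false-alarm rates. A minimal sketch, with made-up counts and a standard log-linear correction, is given below; the numbers are placeholders, not the study's data.

    ```python
    # d' from hits and false alarms (illustrative counts, not the study's data).
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical old/new recognition counts for each stimulus format.
    print("pictograms d':", round(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42), 2))
    print("words      d':", round(d_prime(hits=36, misses=14, false_alarms=15, correct_rejections=35), 2))
    ```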

  5. Task choice and semantic interference in picture naming.

    Science.gov (United States)

    Piai, Vitória; Roelofs, Ardi; Schriefers, Herbert

    2015-05-01

    Evidence from dual-task performance indicates that speakers prefer not to select simultaneous responses in picture naming and another unrelated task, suggesting a response selection bottleneck in naming. In particular, when participants respond to tones with a manual response and name pictures with superimposed semantically related or unrelated distractor words, semantic interference in naming tends to be constant across stimulus onset asynchronies (SOAs) between the tone stimulus and the picture-word stimulus. In the present study, we examine whether semantic interference in picture naming depends on SOA in case of a task choice (naming the picture vs reading the word of a picture-word stimulus) based on tones. This situation requires concurrent processing of the tone stimulus and the picture-word stimulus, but not a manual response to the tones. On each trial, participants either named a picture or read aloud a word depending on the pitch of a tone, which was presented simultaneously with picture-word onset or 350 ms or 1000 ms before picture-word onset. Semantic interference was present with tone pre-exposure, but absent when tone and picture-word stimulus were presented simultaneously. Against the background of the available studies, these results support an account according to which speakers tend to avoid concurrent response selection, but can engage in other types of concurrent processing, such as task choices. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural memory performance. These data provided strong support for the dual-coding model.

  7. The words children hear: Picture books and the statistics for language learning

    OpenAIRE

    Montag, Jessica L.; Jones, Michael N.; Smith, Linda B.

    2015-01-01

    Young children learn language from the speech they hear. Previous work suggests that the statistical diversity of words and of linguistic contexts is associated with better language outcomes. One potential source of lexical diversity is the text of picture books that caregivers read aloud to children. Many parents begin reading to their children shortly after birth, so this is potentially an important source of linguistic input for many children. We constructed a corpus of 100 children’s pict...

  8. An analysis of initial acquisition and maintenance of sight words following picture matching and copy cover, and compare teaching methods.

    OpenAIRE

    Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E

    2004-01-01

    This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition...

  9. Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2014-12-01

    In everyday life, multiple sensory channels jointly trigger emotional experiences, and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice’s emotional tone jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electrophysiological, and peripheral physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research.

  10. rTMS on left prefrontal cortex contributes to memories for positive emotional cues: a comparison between pictures and words.

    Science.gov (United States)

    Balconi, M; Cobelli, C

    2015-02-26

    The present research explored the cortical correlates of emotional memories in response to words and pictures. Subjects' performance (Accuracy Index, AI; response times, RTs; RTs/AI) was considered when repetitive Transcranial Magnetic Stimulation (rTMS) was applied over the left dorsolateral prefrontal cortex (LDLPFC). Specifically, the role of the LDLPFC was tested with a memory task in which old (previously encoded targets) and new (previously not encoded distractors) emotional pictures/words had to be recognized. The valence (positive vs. negative) and arousing power (high vs. low) of the stimuli were also modulated. Moreover, subjective evaluation of the emotional stimuli in terms of valence/arousal was explored. We found a significant improvement in performance (higher AI, reduced RTs, improved general performance) in response to rTMS. This "better recognition effect" was related only to specific emotional features, that is, positive high-arousal pictures or words. Moreover, no significant differences were found between stimulus categories. A direct relationship was also observed between the subjective evaluation of emotional cues and memory performance when rTMS was applied to the LDLPFC. Supported by the valence and approach model of emotions, we suggest that a left-lateralized prefrontal system may induce better recognition of positive high-arousal words, and that the evaluation of emotional cues is related to prefrontal activation, affecting recognition memory for emotions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. Brief Report: Generalisation of Word-Picture Relations in Children with Autism and Typically Developing Children

    Science.gov (United States)

    Hartley, Calum; Allen, Melissa L.

    2014-01-01

    We investigated whether low-functioning children with autism generalise labels from colour photographs based on sameness of shape, colour, or both. Children with autism and language-matched controls were taught novel words paired with photographs of unfamiliar objects, and then sorted pictures and objects into two buckets according to whether or…

  12. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  13. Incidental acquisition of foreign language vocabulary through brief multi-modal exposure.

    Science.gov (United States)

    Bisson, Marie-Josée; van Heuven, Walter J B; Conklin, Kathy; Tunney, Richard J

    2013-01-01

    First language acquisition requires relatively little effort compared to foreign language acquisition and happens more naturally through informal learning. Informal exposure can also benefit foreign language learning, although evidence for this has been limited to speech perception and production. An important question is whether informal exposure to spoken foreign language also leads to vocabulary learning through the creation of form-meaning links. Here we tested the impact of exposure to foreign language words presented with pictures in an incidental learning phase on subsequent explicit foreign language learning. In the explicit learning phase, we asked adults to learn translation equivalents of foreign language words, some of which had appeared in the incidental learning phase. Results revealed rapid learning of the foreign language words in the incidental learning phase showing that informal exposure to multi-modal foreign language leads to foreign language vocabulary acquisition. The creation of form-meaning links during the incidental learning phase is discussed.

  14. Use of Syllabic Logograms to Help Dyslexic Readers of English Visualize Abstract Words as Pictures

    Science.gov (United States)

    Saez-Rodriguez, Alberto

    2009-01-01

    Background: Dyslexics read concrete words better than abstract ones. As a result, one of the major problems facing dyslexics is the fact that only part of the information that they require to communicate is concrete, i.e. can easily be pictured. Method: The experiment involved dyslexic third-grade, English-speaking children (8-year-olds) divided…

  15. The properties of retrieval cues constrain the picture superiority effect.

    Science.gov (United States)

    Weldon, M S; Roediger, H L; Challis, B H

    1989-01-01

    In three experiments, we examined why pictures are remembered better than words on explicit memory tests like recall and recognition, whereas words produce more priming than pictures on some implicit tests, such as word-fragment and word-stem completion (e.g., completing -l-ph-nt or ele----- as elephant). One possibility is that pictures are always more accessible than words if subjects are given explicit retrieval instructions. An alternative possibility is that the properties of the retrieval cues themselves constrain the retrieval processes engaged; word fragments might induce data-driven (perceptually based) retrieval, which favors words regardless of the retrieval instructions. Experiment 1 demonstrated that words were remembered better than pictures on both the word-fragment and word-stem completion tasks under both implicit and explicit retrieval conditions. In Experiment 2, pictures were recalled better than words with semantically related extralist cues. In Experiment 3, when semantic cues were combined with word fragments, pictures and words were recalled equally well under explicit retrieval conditions, but words were superior to pictures under implicit instructions. Thus, the inherently data-limited properties of fragmented words limit their use in accessing conceptual codes. Overall, the results indicate that retrieval operations are largely determined by properties of the retrieval cues under both implicit and explicit retrieval conditions.
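    To make the word-fragment completion task in this record concrete (e.g., -l-ph-nt completed as elephant), here is a small illustrative sketch that treats each dash as one unknown letter and searches a tiny placeholder lexicon with a regular expression; the word list and fragment format are assumptions for demonstration, not the materials used in the study.

    ```python
    # Toy illustration of word-fragment completion: each '-' stands for one
    # unknown letter. The mini-lexicon below is a placeholder, not a real norm set.
    import re

    LEXICON = ["elephant", "elegant", "alphabet", "aardvark", "umbrella"]

    def complete_fragment(fragment, lexicon=LEXICON):
        pattern = re.compile("^" + fragment.replace("-", "[a-z]") + "$")
        return [word for word in lexicon if pattern.match(word)]

    print(complete_fragment("-l-ph-nt"))   # ['elephant']
    print(complete_fragment("ele-----"))   # ['elephant'] (8 letters)
    ```

    Because the cue preserves mostly perceptual (letter-level) rather than conceptual information, such fragments tend to favor studied words over studied pictures, which is the data-driven retrieval point the abstract makes.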

  16. THE EFFECT OF MIND MAPPING WITH PICTURE WORD CARDS TOWARD THE ABILITY OF EARLY READING FOR A HARD OF HEARING STUDENT

    Directory of Open Access Journals (Sweden)

    Nurika Miftakul Janah

    2016-12-01

    A student who is hard of hearing has a limited vocabulary and difficulty understanding abstract words. The purposes of this research were to describe: (1) the early reading ability of a hard-of-hearing student before the intervention, (2) the early reading ability of a hard-of-hearing student after the intervention, and (3) the effect of mind mapping with picture word cards on the early reading ability of a hard-of-hearing student in class I. This study used a single-subject research (SSR) approach with an A-B-A design. The results indicated that mind mapping with picture word cards had a positive effect on the early reading ability of the hard-of-hearing student in class I.

  17. Aging memory for pictures: Using high-density event-related potentials to understand the effect of aging on the picture superiority effect

    OpenAIRE

    Ally, Brandon A.; Waring, Jill D.; Beth, Ellen H.; McKeever, Joshua D.; Milberg, William P.; Budson, Andrew E.

    2007-01-01

    High-density event-related potentials (ERPs) were used to understand the effect of aging on the neural correlates of the picture superiority effect. Pictures and words were systematically varied at study and test while ERPs were recorded at retrieval. Here, the results of the word-word and picture-picture study-test conditions are presented. Behavioral results showed that older adults demonstrated the picture superiority effect to a greater extent than younger adults. The ERP data helped to e...

  18. Pictures, images, and recollective experience.

    Science.gov (United States)

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  19. The Effects of Specific and Categorical Orienting on Children's Incidental and Intentional Memory for Pictures and Words.

    Science.gov (United States)

    Ackerman, Brian P.

    1985-01-01

    Second-graders, fifth-graders, and adults participated in an experiment of cued recall for cue-target picture and word pairs. Results suggested that differences in the encoding of both specific and categorical attribute information contribute to developmental recall differences independently of encoding intent and stimulus modality. (Author/CB)

  20. A Multimodal Discourse Analysis of Advertisements-Based on Visual Grammar

    Directory of Open Access Journals (Sweden)

    Fang Guo

    2017-03-01

    In addition to words, symbols, colors, sculptures, photographs, music, etc. are frequently employed by participants to express themselves in communication. Advertising is closely related to sounds, colors, picture animations and other symbols. This paper aims to show how semiotic resources act effectively to realize real business purposes and to reflect the unique significance of multimodal discourse analysis. Based on Visual Grammar, this paper analyzes 2014 Brazil World Cup advertisements from the perspectives of representational meaning, interactive meaning and compositional meaning. The research aims to prove that different modes within an advertisement depend on each other in an interdependent relationship, and that these relationships play different roles in different contexts.

  1. Investigating the flow of information during speaking: the impact of morpho-phonological, associative, and categorical picture distractors on picture naming.

    Science.gov (United States)

    Bölte, Jens; Böhl, Andrea; Dobel, Christian; Zwitserlood, Pienie

    2015-01-01

    In three experiments, participants named target pictures by means of German compound words (e.g., Gartenstuhl-garden chair), each accompanied by two different distractor pictures (e.g., lawn mower and swimming pool). Targets and distractor pictures were semantically related either associatively (garden chair and lawn mower) or by a shared semantic category (garden chair and wardrobe). Within each type of semantic relation, target and distractor pictures either shared morpho-phonological (word-form) information (Gartenstuhl with Gartenzwerg, garden gnome, and Gartenschlauch, garden hose) or not. A condition with two completely unrelated pictures served as baseline. Target naming was facilitated when distractor and target pictures were morpho-phonologically related. This is clear evidence for the activation of word-form information of distractor pictures. Effects were larger for associatively than for categorically related distractors and targets, which constitutes evidence for lexical competition. Mere categorical relatedness, in the absence of morpho-phonological overlap, resulted in null effects (Experiments 1 and 2), and only speeded target naming when effects reflect only conceptual, but not lexical, processing (Experiment 3). Given that distractor pictures activate their word forms, the data cannot be easily reconciled with discrete serial models. The results fit well with models that allow information to cascade forward from conceptual to word-form levels.

  2. The picture superiority effect in associative recognition.

    Science.gov (United States)

    Hockley, William E

    2008-10-01

    The picture superiority effect has been well documented in tests of item recognition and recall. The present study shows that the picture superiority effect extends to associative recognition. In three experiments, students studied lists consisting of random pairs of concrete words and pairs of line drawings; then they discriminated between intact (old) and rearranged (new) pairs of words and pictures at test. The discrimination advantage for pictures over words was seen in a greater hit rate for intact picture pairs, but there was no difference in the false alarm rates for the two types of stimuli. That is, there was no mirror effect. The same pattern of results was found when the test pairs consisted of the verbal labels of the pictures shown at study (Experiment 4), indicating that the hit rate advantage for picture pairs represents an encoding benefit. The results have implications for theories of the picture superiority effect and models of associative recognition.

  3. Animates are better remembered than inanimates: further evidence from word and picture stimuli.

    Science.gov (United States)

    Bonin, Patrick; Gelin, Margaux; Bugaiska, Aurélia

    2014-04-01

    In three experiments, we showed that animate entities are remembered better than inanimate entities. Experiment 1 revealed better recall for words denoting animate than inanimate items. Experiment 2 replicated this finding with the use of pictures. In Experiment 3, we found better recognition for animate than for inanimate words. Importantly, we also found a higher recall rate of “remember” than of “know” responses for animates, whereas the recall rates were similar for the two types of responses for inanimate items. This finding suggests that animacy enhances not only the quantity but also the quality of memory traces, through the recall of contextual details of previous experiences (i.e., episodic memory). Finally, in Experiment 4, we tested whether the animacy effect was due to animate items being richer in terms of sensory features than inanimate items. The findings provide further evidence for the functionalist view of memory championed by Nairne and coworkers (Nairne, 2010; Nairne & Pandeirada, Cognitive Psychology, 61:1–22, 2010a, 2010b).

  4. Aging memory for pictures: using high-density event-related potentials to understand the effect of aging on the picture superiority effect.

    Science.gov (United States)

    Ally, Brandon A; Waring, Jill D; Beth, Ellen H; McKeever, Joshua D; Milberg, William P; Budson, Andrew E

    2008-01-31

    High-density event-related potentials (ERPs) were used to understand the effect of aging on the neural correlates of the picture superiority effect. Pictures and words were systematically varied at study and test while ERPs were recorded at retrieval. Here, the results of the word-word and picture-picture study-test conditions are presented. Behavioral results showed that older adults demonstrated the picture superiority effect to a greater extent than younger adults. The ERP data helped to explain these findings. The early frontal effect, parietal effect, and late frontal effect were all indistinguishable between older and younger adults for pictures. In contrast, for words, the early frontal and parietal effects were significantly diminished for the older adults compared to the younger adults. These two old/new effects have been linked to familiarity and recollection, respectively, and the authors speculate that these processes are impaired for word-based memory in the course of healthy aging. The findings of this study suggest that pictures allow older adults to compensate for their impaired memorial processes, and may allow these memorial components to function more effectively in older adults.

  5. Priming effect on word reading and recall

    OpenAIRE

    Faria, Isabel Hub; Luegi, Paula

    2008-01-01

    This study focuses on priming as a function of exposure to bimodal stimuli: European Portuguese single words centred on the screen and isolated pictures inserted at the screen's upper right corner, with four kinds of word-picture relation. The eye movements of 18 Portuguese native university students were registered while reading four sets of ten word-picture pairs, and their respective oral recall lists of words or pictures were kept. The results reveal a higher phonological primin...

  6. Reading Words or Pictures: Eye Movement Patterns in Adults and Children Differ by Age Group and Receptive Language Ability

    OpenAIRE

    An, Licong; Wang, Yifang; Sun, Yadong

    2017-01-01

    This study was conducted to explore the differences in the degree of attention given to Chinese print and pictures by children and adults when they read picture books with and without Chinese words. We used an eye tracker from SensoMotoric Instruments to record the visual fixations of the subjects. The results showed that the adults paid more attention to Chinese print and looked at the print sooner than the children did. The stronger the children’s receptive language abilities were, the less...

  7. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    Science.gov (United States)

    Cohn, Neil

    2016-01-01

    Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also poses challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality. Copyright © 2015.

  8. Highcrop picture tool

    OpenAIRE

    Fog, Erik

    2013-01-01

    Pictures provide impulses that words and numbers do not. With images, you can easily spot new opportunities. The Highcrop tool allows for optimization of the organic arable farm based on picture cards. The picture cards are designed to make it easier and more inspiring to look closely at the details of production. By using the picture cards you can spot the areas where there is scope to optimize the production system for better results in the future. Highcrop picture cards can be used to:...

  9. The picture superiority effect in a cross-modality recognition task.

    Science.gov (United States)

    Stenberg, G; Radeborg, K; Hedman, L R

    1995-07-01

    Words and pictures were studied and recognition tests given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Experiment 3 added a manipulation of instructions to name studied objects, and Experiment 4 deviated from the others by presenting both picture and word referring to the same object together for study. The results showed that congruence between study and test modalities consistently facilitated recognition. Furthermore, items studied as pictures were more rapidly recognized than were items studied as words. With repeated testing, the second instance was affected by its predecessor, but the facilitating effect of picture-to-word priming exceeded that of word-to-picture priming. The findings suggest a two-stage recognition process, in which the first stage is based on perceptual familiarity and the second uses semantic links for a retrieval search. Common-code theories that grant privileged access to the semantic code for pictures or, alternatively, dual-code theories that assume mnemonic superiority for the image code are supported by the findings. Explanations of the picture superiority effect as resulting from dual encoding of pictures are not supported by the data.

  10. From Perception to Recognition Memory: Time Course and Lateralization of Neural Substrates of Word and Abstract Picture Processing

    Science.gov (United States)

    Maillard, Louis; Barbeau, Emmanuel J.; Baumann, Cedric; Koessler, Laurent; Benar, Christian; Chauvel, Patrick; Liegeois-Chauvel, Catherine

    2011-01-01

    Through study of clinical cases with brain lesions as well as neuroimaging studies of cognitive processing of words and pictures, it has been established that material-specific hemispheric specialization exists. It remains however unclear whether such specialization holds true for all processes involved in complex tasks, such as recognition…

  11. Multimodal follow-up questions to multimodal answers in a QA system

    NARCIS (Netherlands)

    van Schooten, B.W.; op den Akker, Hendrikus J.A.

    2007-01-01

    We are developing a dialogue manager (DM) for a multimodal interactive Question Answering (QA) system. Our QA system presents answers using text and pictures, and the user may pose follow-up questions using text or speech, while indicating screen elements with the mouse. We developed a corpus of

  12. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    Science.gov (United States)

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4, they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  13. The picture superiority effect: support for the distinctiveness model.

    Science.gov (United States)

    Mintzer, M Z; Snodgrass, J G

    1999-01-01

    The form change paradigm was used to explore the basis for the picture superiority effect. Recognition memory for studied pictures and words was tested in their study form or the alternate form. Form change cost was defined as the difference between recognition performance for same and different form items. Based on the results of Experiment 1 and previous studies, it was difficult to determine the relative cost for studied pictures and words due to a reversal of the mirror effect. We hypothesized that the reversed mirror effect results from subjects' basing their recognition decisions on their assumptions about the study form. Experiments 2 and 3 confirmed this hypothesis and generated a method for evaluating the relative cost for pictures and words despite the reversed mirror effect. More cost was observed for pictures than words, supporting the distinctiveness model of the picture superiority effect.

  14. Examining lateralized semantic access using pictures.

    Science.gov (United States)

    Lovseth, Kyle; Atchley, Ruth Ann

    2010-03-01

    A divided visual field (DVF) experiment examined the semantic processing strategies employed by the cerebral hemispheres to determine if strategies observed with written word stimuli generalize to other media for communicating semantic information. We employed picture stimuli and varied the degree of semantic relatedness between the picture pairs. Participants made an on-line semantic relatedness judgment in response to sequentially presented pictures. We found that when pictures are presented to the right hemisphere, semantic relatedness judgments for picture pairs are generally more accurate than when they are presented to the left hemisphere. Furthermore, consistent with earlier DVF studies employing words, we conclude that the RH is better at accessing or maintaining access to information that has a weak or more remote semantic relationship. We also found evidence of faster access for pictures presented to the LH in the strongly related condition. Overall, these results are consistent with earlier DVF word studies that argue that the cerebral hemispheres each play an important and separable role during semantic retrieval. Copyright 2009 Elsevier Inc. All rights reserved.

  15. Multimodal versus Unimodal Instruction in a Complex Learning Context.

    Science.gov (United States)

    Gellevij, Mark; van der Meij, Hans; de Jong, Ton; Pieters, Jules

    2002-01-01

    Compared multimodal instruction with text and pictures with unimodal text-only instruction as 44 college students used a visual or textual manual to learn a complex software application. Results initially support dual coding theory and indicate that multimodal instruction led to better performance than unimodal instruction. (SLD)

  16. Investigating the flow of information during speaking: The impact of morpho-phonological, associative and categorical picture distractors on picture naming

    Directory of Open Access Journals (Sweden)

    Jens eBölte

    2015-10-01

    In three experiments, participants named target pictures by means of German compound words (e.g., Gartenstuhl - garden chair), each accompanied by two different distractor pictures (e.g., lawn mower and swimming pool). Targets and distractor pictures were semantically related, either associatively (garden chair and lawn mower) or by a shared semantic category (garden chair and wardrobe). Within each type of semantic relation, target and distractor pictures either shared morpho-phonological (word-form) information (Gartenstuhl with Gartenzwerg, garden gnome, and Gartenschlauch, garden hose) or not. A condition with two completely unrelated pictures served as baseline. Target naming was facilitated when distractor and target pictures were morpho-phonologically related. This is clear evidence for the activation of lexical information of distractor pictures. Effects were larger for associatively than for categorically related distractors and targets, which constitutes evidence for lexical competition. Mere categorical relatedness, in the absence of morpho-phonological overlap, resulted in null effects (Experiments 1 and 2), and only speeded target naming when effects reflect only conceptual, not lexical, processing (Experiment 3). Given that distractor pictures activate their word forms, the data cannot be easily reconciled with discrete serial models. The results fit well with models that allow information to cascade forward from conceptual to word-form levels.

  17. Mechanisms of masked evaluative priming: task sets modulate behavioral and electrophysiological priming for pictures and words differentially

    OpenAIRE

    Kiefer, Markus; Liegel, Nathalie; Zovko, Monika; Wentura, Dirk

    2016-01-01

    Research with the evaluative priming paradigm has shown that affective evaluation processes reliably influence cognition and behavior, even when triggered outside awareness. However, the precise mechanisms underlying such subliminal evaluative priming effects, response activation vs semantic processing, are a matter of debate. In this study, we determined the relative contribution of semantic processing and response activation to masked evaluative priming with pictures and words. To this end,...

  18. Three- to Four-Year-Olds' Recognition That Symbols Have a Stable Meaning: Pictures Are Understood Before Written Words

    Science.gov (United States)

    Apperly, Ian. A.; Williams, Emily; Williams, Joelle

    2004-01-01

    In 4 experiments, 120 three- to four-year-old non-readers were asked the identity of a symbolic representation as it appeared with different objects. Consistent with Bialystok (2000), many children judged the identity of written words to vary according to the object with which they appeared, but few made such errors with recognizable pictures...

  19. Picture archiving and communications system EFPACS series

    International Nuclear Information System (INIS)

    Hirasawa, Teiji; Mukasa, Minoru; Hiramatsu, Jun-ichi

    1989-01-01

    Fuji EFPACS (Effective Fuji PACS) is a picture archiving and communications system that efficiently provides centralized management of the large quantities of image data produced in a hospital. The main features of this system are high-speed retrieval and display functions based on advanced imaging technology. The system also strongly supports picture management for multiple modalities, picture storage, and education. The EFPACS-500 and EFPACS-1000 series are available according to system scale, and an optimal system configuration can be obtained by building the system up in modules. This paper describes the features and performance of the EFPACS. (author)

  20. Can pictures speak a thousand words in understanding climate change?

    Science.gov (United States)

    Walton, P.

    2017-12-01

    Pictures are able to engage, inspire and educate people in a way that the spoken or written word cannot, and with 21st Century technology we now have even more ways to present images. Researchers and campaigners working in climate change have used the power of images to great effect, bringing the issue of a warming planet into stark relief through iconic scenes such as the forlorn polar bear adrift on an iceberg. Whilst undeniably successful, this image has now become passé and invisible, requiring the scientific community to identify new ways to engage and educate the general public. This paper reports on a new high-resolution visualisation app developed by the European Space Agency to illustrate the change over time of a number of climate variables. Data, collected via satellite Earth observations, have been rendered into visually stunning animations that can be interrogated in a number of ways to allow the user to understand the spatial and temporal changes of that variable. But is it enough? Can it ever be that all that glisters really is gold?

  1. Young toddlers' word comprehension is flexible and efficient.

    Directory of Open Access Journals (Sweden)

    Elika Bergelson

    Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children's gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers' spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice, which could be called "cup" or "juice") and pictures that had only one likely name for toddlers (such as "apple"), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.

  2. Memory for Pictorial Information and the Picture Superiority Effect.

    Science.gov (United States)

    Maisto, Albert A.; Queen, Debbie Elaine

    1992-01-01

    The performance of 53 younger adults (mean age 20.7) and 52 older adults (mean age 68.3) was compared in a memory task involving pictures, words, and pictures-plus-words. Results showed (1) significantly higher recall scores for younger adults; (2) equivalent picture superiority effect for both groups; and (3) decline in older adults' performance…

  3. Revisiting the picture-superiority effect in symbolic comparisons: do pictures provide privileged access?

    Science.gov (United States)

    Amrhein, Paul C; McDaniel, Mark A; Waddill, Paula

    2002-09-01

    In 4 experiments, symbolic comparisons were investigated to test semantic-memory retrieval accounts espousing processing advantages for picture over word stimuli. In Experiment 1, participants judged pairs of animal names or pictures by responding to questions probing concrete or abstract attributes (texture or size, ferocity or intelligence). Per pair, attributes were salient or nonsalient concerning their prerated relevance to the animals being compared. Distance (near or far) between attribute magnitudes was also varied. Pictures did not significantly speed responding relative to words across all other variables. Advantages were found for far attribute magnitudes (i.e., the distance effect) and salient attributes. The distance effect was much smaller for salient than nonsalient concrete-attribute comparisons. These results were consistently found in additional experiments with increased statistical power to detect modality effects. Our findings argue against dual-coding and some common-code accounts of conceptual attribute processing, urging reexamination of the assumption that pictures confer privileged access to long-term knowledge.

  4. Resolving Semantic Interference during Word Production Requires Central Attention

    Science.gov (United States)

    Kleinman, Daniel

    2013-01-01

    The semantic picture-word interference task has been used to diagnose how speakers resolve competition while selecting words for production. The attentional demands of this resolution process were assessed in 2 dual-task experiments (tone classification followed by picture naming). In Experiment 1, when pictures and distractor words were presented…

  5. Interference of spoken word recognition through phonological priming from visual objects and printed words

    OpenAIRE

    McQueen, J.; Huettig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g...

  6. Item-cued directed forgetting of related words and pictures in children and adults: selective rehearsal versus cognitive inhibition.

    Science.gov (United States)

    Lehman, E B; McKinley-Pace, M; Leonard, A M; Thompson, D; Johns, K

    2001-01-01

    The main purpose of this study was to compare the relative importance of selective rehearsal and cognitive inhibition in accounting for developmental changes in the directed-forgetting paradigm developed by R. A. Bjork (1972). In two experiments, children in Grades 2 and 5 and college students were asked to remember some words or pictures and to forget others when items were categorically related. Their memory for both items and the associated remember or forget cues was then tested with recall and recognition. Fifth graders recognized more of the forget-cued words than college students did. The pattern of results suggested that age differences in rehearsal and source monitoring (i.e., remembering whether a word had been cued remember or forget) were better explanatory mechanisms for children's forgetting inefficiencies than retrieval inhibition was. The results are discussed in terms of a multiple process view of inhibition.

  7. Multimodal news framing effects

    NARCIS (Netherlands)

    Powell, T.E.

    2017-01-01

    Visuals in news media play a vital role in framing citizens’ political preferences. Yet, compared to the written word, visual images are undervalued in political communication research. Using framing theory, this thesis redresses the balance by studying the combined, or multimodal, effects of visual

  8. Some of the thousand words a picture is worth.

    Science.gov (United States)

    Mandler, J M; Johnson, N S

    1976-09-01

    The effects of real-world schemata on recognition of complex pictures were studied. Two kinds of pictures were used: pictures of objects forming real-world scenes and unorganized collections of the same objects. The recognition test employed distractors that varied four types of information: inventory, spatial location, descriptive, and spatial composition. Results emphasized the selective nature of schemata, since superior recognition of one kind of information was offset by loss of another. Spatial location information was better recognized in real-world scenes, and spatial composition information was better recognized in unorganized scenes. Organized and unorganized pictures did not differ with respect to inventory and descriptive information. The longer the pictures were studied, the longer subjects took to recognize them. Reaction time for hits, misses, and false alarms increased dramatically as presentation time increased from 5 to 60 sec. It was suggested that detection of a difference in a distractor terminated search, but that when no difference was detected, an exhaustive search of the available information took place.

  9. Unimodal and multimodal regions for logographic language processing in left ventral occipitotemporal cortex

    Directory of Open Access Journals (Sweden)

    Yuan eDeng

    2013-09-01

    The human neocortex appears to contain a dedicated visual word form area (VWFA) and an adjacent multimodal (visual/auditory) area. However, these conclusions are based on functional magnetic resonance imaging (fMRI) of alphabetic language processing, and alphabetic languages have clear grapheme-to-phoneme correspondence (GPC) rules that make it difficult to dissociate visual-specific processing from form-to-sound mapping. In contrast, the Chinese language has no clear GPC rules. Therefore, the current study examined whether native Chinese readers also have the same VWFA and multimodal area. Two cross-modal tasks, phonological retrieval of visual words and orthographic retrieval of auditory words, were adopted. Different task requirements were also applied to explore how different levels of cognitive processing modulate activation of putative VWFA-like and multimodal-like regions. Results showed that the left occipitotemporal sulcus responded exclusively to visual inputs and that an adjacent region, the left inferior temporal gyrus, showed comparable activation for both visual and auditory inputs. Surprisingly, processing levels did not significantly alter activation of these two regions. These findings indicated that there are both unimodal and multimodal word areas for non-alphabetic language reading, and that activity in these two word-specific regions is independent of task demands at the linguistic level.

  10. Child Readers and the Worlds of the Picture Book

    Science.gov (United States)

    Baird, Adela; Laugharne, Janet; Maagerø, Eva; Tønnessen, Elise Seip

    2016-01-01

    Children as readers of picture books and the ways they respond to, and make meaning from, such texts are the focus of this article, which reports on a small-scale study undertaken in Norway and Wales, UK. The theoretical framing of the research draws on concepts of the multimodal ensemble in picture books and of the reading event as part of a…

  11. The picture superiority effect in a cross-modality recognition task

    OpenAIRE

    Stenberg, Georg; Radeborg, Karl; Hedman, Leif R.

    1995-01-01

    Words and pictures were studied, and recognition tests were given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Exp...

  12. Picture Superiority Doubly Dissociates the ERP Correlates of Recollection and Familiarity

    Science.gov (United States)

    Curran, Tim; Doyle, Jeanne

    2011-01-01

    Two experiments investigated the processes underlying the picture superiority effect on recognition memory. Studied pictures were associated with higher accuracy than studied words, regardless of whether test stimuli were words (Experiment 1) or pictures (Experiment 2). Event-related brain potentials (ERPs) recorded during test suggested that the…

  13. Mechanisms of masked evaluative priming: task sets modulate behavioral and electrophysiological priming for pictures and words differentially.

    Science.gov (United States)

    Kiefer, Markus; Liegel, Nathalie; Zovko, Monika; Wentura, Dirk

    2017-04-01

    Research with the evaluative priming paradigm has shown that affective evaluation processes reliably influence cognition and behavior, even when triggered outside awareness. However, the precise mechanisms underlying such subliminal evaluative priming effects, response activation vs semantic processing, are a matter of debate. In this study, we determined the relative contribution of semantic processing and response activation to masked evaluative priming with pictures and words. To this end, we investigated the modulation of masked pictorial vs verbal priming by previously activated perceptual vs semantic task sets and assessed the electrophysiological correlates of priming using event-related potential (ERP) recordings. Behavioral and electrophysiological effects showed a differential modulation of pictorial and verbal subliminal priming by previously activated task sets: Pictorial priming was only observed during the perceptual but not during the semantic task set. Verbal priming, in contrast, was found when either task set was activated. Furthermore, only verbal priming was associated with a modulation of the N400 ERP component, an index of semantic processing, whereas a priming-related modulation of earlier ERPs, indexing visuo-motor S-R activation, was found for both pictures and words. The results thus demonstrate that different neuro-cognitive processes contribute to unconscious evaluative priming depending on the stimulus format.

  14. Conceptual and perceptual factors in the picture superiority effect

    OpenAIRE

    Stenberg, Georg

    2006-01-01

    The picture superiority effect, i.e. better memory for pictures than for corresponding words, has been variously ascribed to a conceptual or a perceptual processing advantage. The present study aimed to disentangle perceptual and conceptual contributions. Pictures and words were tested for recognition in both their original formats and translated into participants' second language. Multinomial Processing Tree (Batchelder & Riefer, 1999) and MINERVA (Hintzman, 1984) models were fitted to t...

  15. Semantic interference in picture naming during dual-task performance does not vary with reading ability

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Roete, I.E.C.

    2015-01-01

    Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying

  16. Writing words from pictures: what representations are activated, and when?

    Science.gov (United States)

    Bonin, P; Fayol, M

    2000-06-01

    In three experiments, the nature of the representations involved in written picture naming and the time course of their activation were investigated. French participants had to produce picture names while hearing distractors. In Experiment 1, distractors semantically related to the picture names yielded a semantic interference effect when a stimulus onset asynchrony (SOA) of -150 msec was used, but not when an SOA of 0 msec was used, in both spoken and written picture naming. Experiment 2 showed that the semantic interference effect was not located at the conceptual level. In Experiment 3, participants wrote down picture names while hearing semantically related, phonologically related, both semantically and phonologically related, or unrelated distractors, presented at both SOAs. A semantic interference effect was obtained with phonologically unrelated distractors but was eliminated with phonologically related distractors. Facilitatory effects of phonologically related distractors were found at both SOAs. The implications of the findings for written picture naming are discussed.

  17. Activation of semantic information at the sublexical level during handwriting production: Evidence from inhibition effects of Chinese semantic radicals in the picture-word interference paradigm.

    Science.gov (United States)

    Chen, Xuqian; Liao, Yuanlan; Chen, Xianzhe

    2017-08-01

    Using a non-alphabetic language (e.g., Chinese), the present study tested the novel view that semantic information at the sublexical level should be activated during handwriting production. Over 80% of Chinese characters are phonograms, in which semantic radicals represent category information (e.g., 'chair,' 'peach,' and 'orange' are related to plants) while phonetic radicals represent phonetic information (e.g., 'wolf,' 'brightness,' and 'male' are all pronounced /lang/). Under different semantic category conditions at the lexical level (semantically related in Experiment 1; semantically unrelated in Experiment 2), the orthographic relatedness and semantic relatedness of semantic radicals in the picture name and its distractor were manipulated under different SOAs (i.e., stimulus onset asynchrony, the interval between the onset of the picture and the onset of the interference word). Two questions were addressed: (1) Can semantic information be activated at the sublexical level? (2) How are semantic and orthographic information dynamically accessed in word production? Results showed that both orthographic and semantic information were activated under the present picture-word interference paradigm, and that their activation varied dynamically across SOAs, supporting our view that discussions of semantic processes in the writing modality should be extended to the sublexical level. The current findings open the possibility of building new orthography-phonology-semantics models of writing.

  18. Rapid induction of false memory for pictures.

    Science.gov (United States)

    Weinstein, Yana; Shanks, David R

    2010-07-01

    Recognition of pictures is typically extremely accurate, and it is thus unclear whether the reconstructive nature of memory can yield substantial false recognition of highly individuated stimuli. A procedure for the rapid induction of false memories for distinctive colour photographs is proposed. Participants studied a set of object pictures followed by a list of words naming those objects, but embedded in the list were names of unseen objects. When subsequently shown full colour pictures of these unseen objects, participants consistently claimed that they had seen them, while discriminating with high accuracy between studied pictures and new pictures whose names did not appear in the misleading word list. These false memories can be reported with high confidence and accompanied by a feeling of recollection. This new procedure allows the investigation of factors that influence false memory reports with ecologically valid stimuli, and of the similarities and differences between true and false memories.

  19. Selection of words for implementation of the Picture Exchange Communication System - PECS in non-verbal autistic children.

    Science.gov (United States)

    Ferreira, Carine; Bevilacqua, Monica; Ishihara, Mariana; Fiori, Aline; Armonia, Aline; Perissinoto, Jacy; Tamanaha, Ana Carina

    2017-03-09

    It is known that some autistic individuals are considered non-verbal, since they are unable to use verbal language and barely use gestures to compensate for the absence of speech. These individuals' ability to communicate may therefore benefit from the use of the Picture Exchange Communication System - PECS. The objective of this study was to verify the most frequently used words in the implementation of PECS in autistic children and, as a complementary aim, to analyze the correlation between the frequency of these words and the rate of maladaptive behaviors. This is a cross-sectional study. The sample was composed of 31 autistic children, 25 boys and 6 girls, aged 5 to 10 years. To identify the most frequently used words in the initial period of implementation of PECS, the Vocabulary Selection Worksheet was used, and to measure the rate of maladaptive behaviors, we applied the Autism Behavior Checklist (ABC). There was a significant prevalence of items in the category "food", followed by "activities" and "beverages". There was no correlation between the total number of items identified by the families and the rate of maladaptive behaviors. The categories of words most mentioned by the families could be identified, and it was confirmed that the level of maladaptive behaviors did not directly interfere with the preparation of the vocabulary selection worksheet for the children studied.

  20. Semantic interference from distractor pictures in single-picture naming: evidence for competitive lexical selection.

    Science.gov (United States)

    Jescheniak, Jörg D; Matushanskaya, Asya; Mädebach, Andreas; Müller, Matthias M

    2014-10-01

    Picture-naming studies have demonstrated interference from semantic-categorically related distractor words, but not from corresponding distractor pictures, and the lack of generality of the interference effect has been argued to challenge theories viewing lexical selection in speech production as a competitive process. Here, we demonstrate that semantic interference from context pictures does become visible, if sufficient attention is allocated to them. We combined picture naming with a spatial-cuing procedure. When participants' attention was shifted to the distractor, semantically related distractor pictures interfered with the response, as compared with unrelated distractor pictures. This finding supports models conceiving lexical retrieval as competitive (Levelt, Roelofs, & Meyer, 1999) but is difficult to reconcile with the response exclusion hypothesis (Finkbeiner & Caramazza, 2006b) proposed as an alternative.

  1. The Development of the Picture-Superiority Effect

    Science.gov (United States)

    Whitehouse, Andrew J. O.; Maybery, Murray T.; Durkin, Kevin

    2006-01-01

    When pictures and words are presented serially in an explicit memory task, recall of the pictures is superior. While this effect is well established in the adult population, little is known of the development of this picture-superiority effect in typical development. This task was administered to 80 participants from middle childhood to…

  2. Interference of spoken word recognition through phonological priming from visual objects and printed words

    NARCIS (Netherlands)

    McQueen, J.M.; Hüttig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase

  3. Gist-based conceptual processing of pictures remains intact in patients with amnestic mild cognitive impairment.

    Science.gov (United States)

    Deason, Rebecca G; Hussey, Erin P; Budson, Andrew E; Ally, Brandon A

    2012-03-01

    The picture superiority effect, better memory for pictures compared to words, has been found in young adults, healthy older adults, and, most recently, in patients with Alzheimer's disease and mild cognitive impairment. Although the picture superiority effect is widely found, there is still debate over what drives this effect. One main question is whether it is enhanced perceptual or conceptual information that leads to the advantage for pictures over words. In this experiment, we examined the picture superiority effect in healthy older adults and patients with amnestic mild cognitive impairment (MCI) to better understand the role of gist-based conceptual processing. We had participants study three exemplars of categories as either words or pictures. In the test phase, participants were again shown pictures or words and were asked to determine whether the item was in the same category as something they had studied earlier or whether it was from a new category. We found that all participants demonstrated a robust picture superiority effect, better performance for pictures than for words. These results suggest that the gist-based conceptual processing of pictures is preserved in patients with MCI. While in healthy older adults preserved recollection for pictures could lead to the picture superiority effect, in patients with MCI it is most likely that the picture superiority effect is a result of spared conceptually based familiarity for pictures, perhaps combined with their intact ability to extract and use gist information.

  4. Recent developments in multimodality fluorescence imaging probes

    Directory of Open Access Journals (Sweden)

    Jianhong Zhao

    2018-05-01

    Multimodality optical imaging probes have emerged as powerful tools that improve detection sensitivity and accuracy, which are important in disease diagnosis and treatment. In this review, we focus on recent developments in the integration of optical fluorescence imaging (OFI) probes with other imaging modalities such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and photoacoustic imaging (PAI). The imaging technologies are briefly described in order to introduce the strengths and limitations of each technique and the need for further multimodality optical imaging probe development. The emphasis of this account is placed on how design strategies are currently implemented to afford physicochemically and biologically compatible multimodality optical fluorescence imaging probes. We also present studies that overcame intrinsic disadvantages of each imaging technique by a multimodality approach with improved detection sensitivity and accuracy. KEY WORDS: Optical imaging, Fluorescence, Multimodality, Near-infrared fluorescence, Nanoprobe, Computed tomography, Magnetic resonance imaging, Positron emission tomography, Single-photon emission computed tomography, Photoacoustic imaging

  5. The Picture Superiority Effect in Recognition Memory: A Developmental Study Using the Response Signal Procedure

    Science.gov (United States)

    Defeyter, Margaret Anne; Russo, Riccardo; McPartlin, Pamela Louise

    2009-01-01

    Items studied as pictures are better remembered than items studied as words even when test items are presented as words. The present study examined the development of this picture superiority effect in recognition memory. Four groups ranging in age from 7 to 20 years participated. They studied words and pictures, with test stimuli always presented…

  6. Aging and the picture superiority effect in recall.

    Science.gov (United States)

    Winograd, E; Smith, A D; Simon, E W

    1982-01-01

    One recurrent theme in the literature on aging and memory is that the decline of memory for nonverbal information is steeper than for verbal information. This research compares verbal and visual encoding using the picture superiority effect, the finding that pictures are remembered better than words. In the first experiment, an interaction was found between age and type of material; younger subjects recalled more pictures than words while older subjects did not. However, the overall effect was small and two further experiments were conducted. In both of these experiments, the picture superiority effect was found in both age groups with no interaction. In addition, performing a semantic orienting task had no effect on recall. The finding of a picture superiority effect in older subjects indicates that nonverbal codes can be effectively used by subjects in all age groups to facilitate memory performance.

  7. The picture superiority effect in patients with Alzheimer's disease and mild cognitive impairment.

    Science.gov (United States)

    Ally, Brandon A; Gold, Carl A; Budson, Andrew E

    2009-01-01

    The fact that pictures are better remembered than words has been reported in the literature for over 30 years. While this picture superiority effect has been consistently found in healthy young and older adults, no study has directly evaluated the presence of the effect in patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI). Clinical observations have indicated that pictures enhance memory in these patients, suggesting that the picture superiority effect may be intact. However, several studies have reported visual processing impairments in AD and MCI patients which might diminish the picture superiority effect. Using a recognition memory paradigm, we tested memory for pictures versus words in these patients. The results showed that the picture superiority effect is intact, and that these patients showed a similar benefit to healthy controls from studying pictures compared to words. The findings are discussed in terms of visual processing and possible clinical importance.

  8. When does word frequency influence written production?

    Science.gov (United States)

    Baus, Cristina; Strijkers, Kristof; Costa, Albert

    2013-01-01

    The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typists in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high frequency while the remaining were of low frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency on the first keystroke latency but not on the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner in which words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.
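    As an illustration only, the chronometric measures named above (first keystroke latency and interstroke intervals) can be computed from keystroke timestamps in a few lines. The sketch below is ours, not the authors' analysis code; the function name and the trial data are invented, and it assumes timestamps in milliseconds with the picture onset as the time origin.

        # Hypothetical sketch: derive first-keystroke latency and mean
        # interstroke interval from one trial's keystroke timestamps (ms).
        import numpy as np

        def keystroke_measures(picture_onset, key_times):
            """Return first-keystroke latency and mean interstroke interval."""
            key_times = np.asarray(key_times, dtype=float)
            first_latency = key_times[0] - picture_onset
            interstroke = np.diff(key_times)  # gaps between successive keypresses
            mean_isi = interstroke.mean() if interstroke.size else float("nan")
            return first_latency, mean_isi

        # One invented trial: picture shown at t = 0 ms, name typed key by key.
        lat, isi = keystroke_measures(0.0, [812, 934, 1040, 1150, 1263])
        print(f"first-keystroke latency = {lat:.0f} ms, mean ISI = {isi:.0f} ms")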

  9. When does word frequency influence written production?

    Directory of Open Access Journals (Sweden)

    Cristina eBaus

    2013-12-01

    The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typists in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high frequency while the remaining were of low frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analysed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency on the first keystroke latency but not on the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner in which words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  10. A Bridge between Pictures and Print.

    Science.gov (United States)

    Jeffree, Dorothy

    1981-01-01

    The experiment investigated the feasibility of bridging the gap between the recognition of pictures and the recognition of words in four mentally handicapped adolescents by using a modified version of symbol accentuation (in which a printed word looks like the object it represents).

  11. An auditory analog of the picture superiority effect.

    Science.gov (United States)

    Crutcher, Robert J; Beer, Jenay M

    2011-01-01

    Previous research has found that pictures (e.g., a picture of an elephant) are remembered better than words (e.g., the word "elephant"), an empirical finding called the picture superiority effect (Paivio & Csapo. Cognitive Psychology 5(2):176-206, 1973). However, very little research has investigated such memory differences for other types of sensory stimuli (e.g. sounds or odors) and their verbal labels. Four experiments compared recall of environmental sounds (e.g., ringing) and spoken verbal labels of those sounds (e.g., "ringing"). In contrast to earlier studies that have shown no difference in recall of sounds and spoken verbal labels (Philipchalk & Rowe. Journal of Experimental Psychology 91(2):341-343, 1971; Paivio, Philipchalk, & Rowe. Memory & Cognition 3(6):586-590, 1975), the experiments reported here yielded clear evidence for an auditory analog of the picture superiority effect. Experiments 1 and 2 showed that sounds were recalled better than the verbal labels of those sounds. Experiment 2 also showed that verbal labels are recalled as well as sounds when participants imagine the sound that the word labels. Experiments 3 and 4 extended these findings to incidental-processing task paradigms and showed that the advantage of sounds over words is enhanced when participants are induced to label the sounds.

  12. Picture-Word Differences and Conceptual Frequency Judgments.

    Science.gov (United States)

    Levin, Joel R.; And Others

    Recent evidence suggests that whereas pictures are more easily recognized, discriminated, associated, and recalled than their corresponding verbal labels, this is not the case in concept acquisition/utilization tasks. If such evidence is interpreted in terms of a "frequency theory" perspective, one would expect the typically obtained…

  13. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    Science.gov (United States)

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4 s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups: one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and the hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and the network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation).

  14. Extensions of the picture superiority effect in associative recognition.

    Science.gov (United States)

    Hockley, William E; Bancroft, Tyler

    2011-12-01

    Previous research has shown that the picture superiority effect (PSE) is seen in tests of associative recognition for random pairs of line drawings compared to pairs of concrete words (Hockley, 2008). In the present study we demonstrated that the PSE for associative recognition is still observed when subjects have correctly identified the individual items of each pair as old (Experiment 1), and that this effect is not due to rehearsal borrowing (Experiment 2). The PSE for associative recognition is also shown to be present but attenuated for mixed picture-word pairs (Experiment 3), and similar in magnitude for pairs of simple black and white line drawings and coloured photographs of detailed objects (Experiment 4). The results are consistent with the view that the semantic meaning of nameable pictures is activated faster than that of words, thereby affording subjects more time to generate and elaborate meaningful associations between items depicted in picture form.

  15. Are pictures good for learning new vocabulary in a foreign language? Only if you think they are not.

    Science.gov (United States)

    Carpenter, Shana K; Olson, Kellie M

    2012-01-01

    The current study explored whether new words in a foreign language are learned better from pictures than from native language translations. In both between-subjects and within-subject designs, Swahili words were not learned better from pictures than from English translations (Experiments 1-3). Judgments of learning revealed that participants exhibited greater overconfidence in their ability to recall a Swahili word from a picture than from a translation (Experiments 2-3), and Swahili words were also considered easier to process when paired with pictures rather than translations (Experiment 4). When this overconfidence bias was eliminated through retrieval practice (Experiment 2) and instructions warning participants to not be overconfident (Experiment 3), Swahili words were learned better from pictures than from translations. It appears, therefore, that pictures can facilitate learning of foreign language vocabulary--as long as participants are not too overconfident in the power of a picture to help them learn a new word.

  16. A Conceptual Paper on the Application of the Picture Word Inductive Model Using Bruner's Constructivist View of Learning and the Cognitive Load Theory

    Science.gov (United States)

    Jiang, Xuan; Perkins, Kyle

    2013-01-01

    Bruner's constructs of learning, specifically the structure of learning, spiral curriculum, and discovery learning, in conjunction with the Cognitive Load Theory, are used to evaluate the Picture Word Inductive Model (PWIM), an inquiry-oriented inductive language arts strategy designed to teach K-6 children phonics and spelling. The PWIM reflects…

  17. The robustness of false memory for emotional pictures.

    Science.gov (United States)

    Bessette-Symons, Brandy A

    2018-02-01

    Emotional material is commonly reported to be more accurately recognised; however, there is substantial evidence of increased false alarm rates (FAR) for emotional material and several reports of stronger influences on response bias than accuracy. This pattern is more frequently reported for words than pictures. Research on the mechanisms underlying bias differences has mostly focused on word lists under short retention intervals. This article presents four series of experiments examining recognition memory for emotional pictures while varying arousal and the control over the content of the pictures at two retention intervals, and one study measuring the relatedness of the series picture sets. Under the shorter retention interval, emotion increased false alarms and reduced accuracy. Under the longer retention interval emotion increased hit rates and FAR, resulting in reduced accuracy and/or bias. At both retention intervals, the pattern of valence effects differed based on the arousal associated with the picture sets. Emotional pictures were found to be more related than neutral pictures in each set; however, the influence of relatedness alone does not provide an adequate explanation for all emotional differences. The results demonstrate substantial emotional differences in picture recognition that vary based on valence, arousal and retention interval.

  18. How lingering representations of abandoned context words affect speech production.

    Science.gov (United States)

    Tydgat, Ilse; Diependaele, Kevin; Hartsuiker, Robert J; Pickering, Martin J

    2012-07-01

    Four experiments tested whether and how initially planned but then abandoned speech can influence the production of a subsequent resumption. Participants named initial pictures, which were sometimes suddenly replaced by target pictures that were related in meaning or word form or were unrelated. They then had to stop and resume with the name of the target picture. Target picture naming latencies were measured separately for trials in which the initial speech was skipped, interrupted, or completed. Semantically related initial pictures helped the production of the target word, although the effect dissipated once the utterance of the initial picture name had been completed. In contrast, phonologically related initial pictures hindered the production of the target word, but only for trials in which the name of the initial picture had at least partly been uttered. This semantic facilitation and phonological interference did not depend on the time interval between the initial and target picture, which was either varied between 200 ms and 400 ms (Experiments 1-2) or was kept constant at 300 ms (Experiments 3-4). We discuss the implications of these results for models of speech self-monitoring and for models of problem-free word production.

  19. The effect of the Trier Social Stress Test (TSST) on item and associative recognition of words and pictures in healthy participants

    Directory of Open Access Journals (Sweden)

    Jonathan eGuez

    2016-04-01

    Psychological stress, induced by the Trier Social Stress Test (TSST), has repeatedly been shown to alter memory performance. Although factors influencing memory performance such as stimulus nature (verbal/pictorial) and emotional valence have been extensively studied, results on whether stress impairs or improves memory are still inconsistent. This study aimed at exploring the effect of the TSST on item versus associative memory for neutral verbal and pictorial stimuli. Forty-eight healthy subjects were recruited; 24 participants were randomly assigned to the TSST group and the remaining 24 participants were assigned to the control group. Stress reactivity was measured by psychological (subjective state anxiety ratings) and physiological (galvanic skin response recordings) measurements. Subjects performed an item-association memory task for both stimulus types (words, pictures) simultaneously, before and after the stress/non-stress manipulation. The results showed that memory recognition for pictorial stimuli was higher than for verbal stimuli. Memory for both words and pictures was impaired following the TSST; while the source of this impairment was specific to associative recognition for pictures, a more general deficit was observed for verbal material, as expressed in decreased recognition for both items and associations following the TSST. Response latency analysis indicated that the TSST manipulation decreased response time but at the cost of memory accuracy. We conclude that stress does not uniformly affect memory; rather, it interacts with the task’s cognitive load and stimulus type. Applying the current results to patients diagnosed with disorders associated with traumatic stress, our findings in healthy subjects under acute stress provide further support for the assertion that patients’ impaired memory originates in poor recollection processing following depletion of attentional resources.

  20. The picture superiority effect in patients with Alzheimer’s disease and mild cognitive impairment

    Science.gov (United States)

    Ally, Brandon A.; Gold, Carl A.; Budson, Andrew E.

    2009-01-01

    The fact that pictures are better remembered than words has been reported in the literature for over 30 years. While this picture superiority effect has been consistently found in healthy young and older adults, no study has directly evaluated the presence of the effect in patients with Alzheimer’s disease (AD) or mild cognitive impairment (MCI). Clinical observations have indicated that pictures enhance memory in these patients, suggesting that the picture superiority effect may be intact. However, several studies have reported visual processing impairments in AD and MCI patients which might diminish the picture superiority effect. Using a recognition memory paradigm, we tested memory for pictures versus words in these patients. The results showed that the picture superiority effect is intact, and that these patients showed a similar benefit to healthy controls from studying pictures compared to words. The findings are discussed in terms of visual processing and possible clinical importance. PMID:18992266

  1. One look is worth a thousand words: New picture stimuli of interpersonal situations.

    Science.gov (United States)

    Fuchs, Simon; Bohleber, Laura M; Ernst, Jutta; Soguel-Dit-Piquard, Jasmine; Boeker, Heinz; Richter, Andre

    2018-06-01

    This paper introduces a picture system that can be used in functional imaging experiments exploring interpersonal relations. This is important for psychotherapy research to understand the neural basis of psychological treatment effects. Pictures have many advantages for the design of functional imaging experiments, but no picture system illustrating interpersonal behavior patterns has, to date, been available. We therefore developed, on the basis of a validated card-sorting test, the Interpersonal Relations Picture System. In all, 43 pictures with 2 or more stick figures in different social situations and 9 control pictures were composed. To test the relation between each picture and the appropriate description, two successive online surveys, including 1058 and 675 individuals respectively, were conducted. The expressiveness of each picture was assessed using two question types. In total, 24 pictures and 6 control pictures met our criteria for sufficient strength and consistency with the appropriate description. The two measures were correlated with each other for all pictures illustrating interpersonal behavior, but not for the control pictures. Relations to other stimulus types and the applicability of the new picture system in functional neuroimaging methods are discussed. It is concluded that the new system will be helpful in studying the profound effect of relational change in psychotherapy.

  2. Effects on automatic attention due to exposure to pictures of emotional faces while performing Chinese word judgment tasks.

    Science.gov (United States)

    Junhong, Huang; Renlai, Zhou; Senqi, Hu

    2013-01-01

    Two experiments were conducted to investigate the automatic processing of emotional facial expressions while performing low or high demand cognitive tasks under unattended conditions. In Experiment 1, 35 subjects performed low (judging the structure of Chinese words) and high (judging the tone of Chinese words) cognitive load tasks while exposed to unattended pictures of fearful, neutral, or happy faces. The results revealed that reaction times were slower and performance accuracy was higher for the low cognitive load task than for the high cognitive load task. Exposure to fearful faces resulted in significantly longer reaction times and lower accuracy than exposure to neutral faces on the low cognitive load task. In Experiment 2, 26 subjects performed the same word judgment tasks and their brain event-related potentials (ERPs) were measured for a period of 800 ms after the onset of the task stimulus. The amplitudes of the early ERP component around 176 ms (P2) elicited by unattended fearful faces over frontal-central-parietal recording sites were significantly larger than those elicited by unattended neutral faces during the word structure judgment task. Together, the findings of the two experiments indicated that unattended fearful faces captured significantly more attention resources than unattended neutral faces on a low cognitive load task, but not on a high cognitive load task. It was concluded that fearful faces can automatically capture attention if residual attention resources are available under unattended conditions.

  3. Parallel language activation during word processing in bilinguals: Evidence from word production in sentence context

    NARCIS (Netherlands)

    Starreveld, P.A.; de Groot, A.M.B.; Rossmark, B.M.M.; van Hell, J.G.

    2014-01-01

    In two picture-naming experiments we examined whether bilinguals co-activate the non-target language during word production in the target language. The pictures were presented out-of-context (Experiment 1) or in visually presented sentence contexts (Experiment 2). In both experiments different

  4. ERP correlates of unexpected word forms in a picture–word study of infants and adults

    Science.gov (United States)

    Duta, M.D.; Styles, S.J.; Plunkett, K.

    2012-01-01

    We tested 14-month-olds and adults in an event-related potential (ERP) study in which pictures of familiar objects generated expectations about upcoming word forms. Expected word forms labelled the picture (word condition), while unexpected word forms mismatched by either a small deviation in word-medial vowel height (mispronunciation condition) or a large deviation from the onset of the first speech segment (pseudoword condition). Both infants and adults showed sensitivity to both types of unexpected word form. Adults showed a chain of discrete effects: positivity over the N1 wave, negativity over the P2 wave (PMN effect) and negativity over the N2 wave (N400 effect). Infants showed a similar pattern, including a robust effect similar to the adult P2 effect. These observations were underpinned by a novel visualisation method which shows the dynamics of the ERP within bands of the scalp over time. The results demonstrate shared processing mechanisms across development, as even subtle deviations from expected word forms were indexed in both age groups by a reduction in the amplitude of characteristic waves in the early auditory evoked potential. PMID:22483072

  5. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  6. Multimodal versus Unimodal Instructions in a Complex Learning Context.

    NARCIS (Netherlands)

    Gellevij, M.R.M.; van der Meij, Hans; de Jong, Anthonius J.M.; Pieters, Julius Marie

    2002-01-01

    Multimodal instruction with text and pictures was compared with unimodal, text-only instruction. More specifically, 44 students used a visual or a textual manual to learn a complex software application. During two 103–116-min training sessions, cognitive load, and time and ability to recover from

  7. Multimodal cuing of autobiographical memory in semantic dementia.

    Science.gov (United States)

    Greenberg, Daniel L; Ogar, Jennifer M; Viskontas, Indre V; Gorno Tempini, Maria Luisa; Miller, Bruce; Knowlton, Barbara J

    2011-01-01

    Individuals with semantic dementia (SD) have impaired autobiographical memory (AM), but the extent of the impairment has been controversial. According to one report (Westmacott, Leach, Freedman, & Moscovitch, 2001), patient performance was better when visual cues were used instead of verbal cues; however, the visual cues used in that study (family photographs) provided more retrieval support than do the word cues that are typically used in AM studies. In the present study, we sought to disentangle the effects of retrieval support and cue modality. We cued AMs of 5 patients with SD and 5 controls with words, simple pictures, and odors. Memories were elicited from childhood, early adulthood, and recent adulthood; they were scored for level of detail and episodic specificity. The patients were impaired across all time periods and stimulus modalities. Within the patient group, words and pictures were equally effective as cues (Friedman test; χ² = 0.25, p = .61), whereas odors were less effective than both words and pictures (for words vs. odors, χ² = 7.83, p = .005; for pictures vs. odors, χ² = 6.18, p = .01). There was no evidence of a temporal gradient in either group (for patients with SD, χ² = 0.24, p = .89; for controls, χ² < 2.07, p = .35). Once the effect of retrieval support is equated across stimulus modalities, there is no evidence for an advantage of visual cues over verbal cues. The greater impairment for olfactory cues presumably reflects degeneration of anterior temporal regions that support olfactory memory.
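    For readers unfamiliar with the nonparametric comparison reported above, the sketch below shows how a Friedman test across cue modalities can be run with SciPy. The scores are invented for illustration; only the test itself corresponds to the analysis named in the abstract.

        # Hypothetical episodic-specificity scores for 5 patients, one value per cue type.
        from scipy.stats import friedmanchisquare

        word_cues    = [3, 2, 4, 3, 2]
        picture_cues = [3, 3, 4, 2, 2]
        odor_cues    = [1, 2, 2, 1, 1]

        # Repeated-measures comparison of the three cue modalities within patients.
        stat, p = friedmanchisquare(word_cues, picture_cues, odor_cues)
        print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")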

  8. The word-length effect and disyllabic words.

    Science.gov (United States)

    Lovatt, P; Avons, S E; Masterson, J

    2000-02-01

    Three experiments compared immediate serial recall of disyllabic words that differed on spoken duration. Two sets of long- and short-duration words were selected, in each case maximizing duration differences but matching for frequency, familiarity, phonological similarity, and number of phonemes, and controlling for semantic associations. Serial recall measures were obtained using auditory and visual presentation and spoken and picture-pointing recall. In Experiments 1a and 1b, using the first set of items, long words were better recalled than short words. In Experiments 2a and 2b, using the second set of items, no difference was found between long and short disyllabic words. Experiment 3 confirmed the large advantage for short-duration words in the word set originally selected by Baddeley, Thomson, and Buchanan (1975). These findings suggest that there is no reliable advantage for short-duration disyllables in span tasks, and that previous accounts of a word-length effect in disyllables are based on accidental differences between list items. The failure to find an effect of word duration casts doubt on theories that propose that the capacity of memory span is determined by the duration of list items or the decay rate of phonological information in short-term memory.

  9. Picture This: 4-H Press Corps Builds Life Skills

    Science.gov (United States)

    Clary, Christy D.

    2018-01-01

    A picture is worth a thousand words! Extension professionals are often looking for the picture that best captures an event and tells its story. Look beneath the surface, though, and a picture is worth much more. Developing a 4-H press corps results in a collection of useful photos but has the added benefit of providing 4-H members with an…

  10. Classic Classroom Activities: The Oxford Picture Dictionary Program.

    Science.gov (United States)

    Weiss, Renee; Adelson-Goldstein, Jayme; Shapiro, Norma

    This teacher resource book offers over 100 reproducible communicative practice activities and 768 picture cards based on the vocabulary of the Oxford Picture Dictionary. Teacher's notes and instructions, including adaptations for multilevel classes, are provided. The activities book has up-to-date art and graphics, explaining over 3700 words. The…

  11. On the facilitatory effects of cognate words in bilingual speech production.

    Science.gov (United States)

    Costa, Albert; Santesteban, Mikel; Caño, Agnès

    2005-07-01

    There is a growing body of evidence showing that a word's cognate status is an important dimension affecting the naming performance of bilingual speakers. In a recent article, Kohnert extended this observation to the naming performance of an aphasic bilingual (DJ). DJ named pictures with cognate names more accurately than pictures with non-cognate names. Furthermore, having named the pictures in Spanish helped the subsequent retrieval (with a delay of one week between the two tests) of the same pictures' names in English, but only for pictures with cognate names. That is, there was a language transfer but only for those translation words that were phonologically similar. In this article we first evaluate the conclusions drawn from these results by Kohnert, and second we discuss the theoretical implications of the facilitatory effects of cognate words for models of speech production in bilingual speakers.

  12. Effects of auditory and visual modalities in recall of words.

    Science.gov (United States)

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the visual (picture) modalities but was not significantly different from the visual (printed word) modality. Within the visual modalities, printed words were superior to colored pictures. Generally, recall was significantly higher for conditions with multiple modes of stimulus representation than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  13. Effect of post-encoding emotion on recollection and familiarity for pictures.

    Science.gov (United States)

    Wang, Bo; Ren, Yanju

    2017-07-01

    Although prior studies have examined the effect of post-encoding emotional arousal on recognition memory for words, it is unknown whether the enhancement effect observed for words generalizes to pictures. Furthermore, prior studies using words have shown that the effect of emotional arousal can be modulated by stimulus valence and by delay in emotion induction, but it is unclear whether such modulation extends to pictures and whether other factors such as encoding method (incidental vs. intentional encoding) can be modulatory. Five experiments were conducted to answer these questions. In Experiment 1, participants encoded a list of neutral and negative pictures and then watched a 3-min neutral or negative video. The delayed test showed that negative arousal impaired recollection regardless of picture valence but had no effect on familiarity. Experiment 2 replicated the above findings. Experiment 3 was similar to Experiment 1 except that participants watched a 3-min neutral, negative, or positive video and completed free recall before the recognition test. Unlike in the prior two experiments, the impairment effect of negative arousal disappeared. Experiment 4, in which the free recall task was eliminated, replicated the results of Experiment 3. Experiment 5 replicated Experiments 1 and 2 and further showed that the impairment effects of negative arousal could be modulated by delay in emotion induction but not by encoding method or stimulus valence. Taken together, the current study suggests that the enhancement effect observed for words may not generalize to pictures.

  14. Distinct patterns of brain activity characterise lexical activation and competition in spoken word production.

    Directory of Open Access Journals (Sweden)

    Vitória Piai

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than with unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than on related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350-650 ms (4-10 Hz) in left superior frontal gyrus was larger on related than on unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
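    The phase-locked/non-phase-locked distinction drawn above can be illustrated with a small synthetic example: activity that keeps the same phase across trials survives trial averaging (evoked, phase-locked), whereas activity whose phase jitters from trial to trial is only visible in single-trial power once the average is subtracted (induced, non-phase-locked). The sketch below is our own simplified illustration with synthetic signals, not the study's MEG pipeline; all parameters are invented.

        # Synthetic illustration of phase-locked (evoked) vs non-phase-locked
        # (induced) activity; signals and parameters are invented.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_times = 60, 500
        t = np.linspace(0.0, 1.0, n_times)

        trials = np.empty((n_trials, n_times))
        for i in range(n_trials):
            phase_locked = np.sin(2 * np.pi * 10 * t)                          # same phase every trial
            jittered = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))  # phase varies per trial
            trials[i] = phase_locked + jittered + 0.5 * rng.standard_normal(n_times)

        evoked = trials.mean(axis=0)                     # phase-locked part survives averaging
        induced_power = ((trials - evoked) ** 2).mean()  # residual, non-phase-locked power
        print(f"evoked peak = {np.abs(evoked).max():.2f}, induced power = {induced_power:.2f}")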

  15. NIH Abroad: Pictures Are Crowd Pullers

    Science.gov (United States)

    "Pictures Are Crowd Pullers …" Art, culture, and the Internet combine to intervene against malaria ... Not ripe mangoes. Not witchcraft. The images and words, which speak directly to local beliefs in villages ...

  16. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not the two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word) equally, whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, rather than on phonemic segment-based processing. We interpret the differences in spoken word

  17. A Multimodal Search Engine for Medical Imaging Studies.

    Science.gov (United States)

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

    The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential for decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject remains an area of active research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated into open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing the main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
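
    As an illustration of the kind of multimodal query technique described above, the sketch below combines a text-retrieval score with an image-feature similarity by simple late fusion. It is only a minimal sketch under assumed inputs (the names `text_scores`, `query_vec`, `image_vecs`, and the weight `alpha` are hypothetical), not the platform's actual implementation.

```python
import numpy as np

def late_fusion_rank(text_scores, query_vec, image_vecs, alpha=0.5):
    """Rank candidate studies by a weighted sum of a text-retrieval score and
    cosine similarity between image feature vectors (illustrative only)."""
    image_vecs = np.asarray(image_vecs, dtype=float)
    query_vec = np.asarray(query_vec, dtype=float)
    # Cosine similarity between the query's image profile and each study's profile
    sims = image_vecs @ query_vec / (
        np.linalg.norm(image_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12
    )
    combined = alpha * np.asarray(text_scores, dtype=float) + (1 - alpha) * sims
    return np.argsort(combined)[::-1]  # indices of studies, best match first

# Toy usage: three candidate studies with made-up scores and 2-D image profiles
ranking = late_fusion_rank(
    text_scores=[0.2, 0.9, 0.5],
    query_vec=[1.0, 0.0],
    image_vecs=[[0.9, 0.1], [0.1, 0.9], [0.7, 0.3]],
)
print(ranking)
```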

  18. Gender Differences in Emotional Language in Children's Picture Books.

    Science.gov (United States)

    Tepper, Clary A.; Cassidy, Kimberly Wright

    1999-01-01

    Examined gender differences in emotional language in children's picture books, using 178 books read to or by preschool children. Males had higher representation in titles, pictures, and central roles, but males and females were associated with equal amounts of emotional language and similar types of emotional words. (SLD)

  19. Narrative-Based Intervention for Word-Finding Difficulties: A Case Study

    Science.gov (United States)

    Marks, Ian; Stokes, Stephanie F.

    2010-01-01

    Background: Children with word-finding difficulties manifest a high frequency of word-finding characteristics in narrative, yet word-finding interventions have concentrated on single-word treatments and outcome measures. Aims: This study measured the effectiveness of a narrative-based intervention in improving single-word picture-naming and…

  20. A collective theory of happiness: words related to the word "happiness" in Swedish online newspapers.

    Science.gov (United States)

    Garcia, Danilo; Sikström, Sverker

    2013-06-01

    It may be suggested that the representation of happiness in online media is collective in nature because it is a picture of happiness communicated by relatively few individuals to the masses. The present study is based on articles published in Swedish daily online newspapers in 2010; the data corpus comprises 1.5 million words. We investigated which words were most (un)common in articles containing the word "happiness" as compared with articles not containing this word. The results show that words related to people (by use of all relevant pronouns: you/me and us/them); important others (e.g., grandmother, mother); the Swedish royal wedding (e.g., Prince Daniel, Princess Victoria); and the FIFA World Cup (e.g., Zlatan, Argentina, Drogba) were highly recurrent in articles containing the word happiness. In contrast, words related to objects, such as money (e.g., millions, billions), bestselling gadgets (e.g., iPad, iPhone), and companies (e.g., Google, Windows), were predictive of contexts in which the word happiness did not occur. The results presented here are in accordance with findings in the happiness literature showing that relationships, not material things, are what make people happy. We suggest that our findings mirror a collective theory of happiness, that is, a shared picture or agreement, among members of a community, concerning what makes people happy. The fact that this representation is made public on such a large scale makes it collective in nature.
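
    To make the corpus comparison concrete, the sketch below ranks words by how over-represented they are in articles containing a target word relative to the rest of a corpus, using add-one-smoothed log-odds. This is only an illustrative approximation of the kind of analysis described, assuming simple whitespace tokenization; the function name, the `min_count` cutoff, and the toy texts are invented, and this is not the authors' actual method or data.

```python
from collections import Counter
import math

def log_odds(target_texts, reference_texts, min_count=5):
    """Rank words by over-representation in target_texts relative to
    reference_texts, using add-one smoothed log-odds."""
    tgt = Counter(w for t in target_texts for w in t.lower().split())
    ref = Counter(w for t in reference_texts for w in t.lower().split())
    n_tgt, n_ref = sum(tgt.values()), sum(ref.values())
    scores = {}
    for w in set(tgt) | set(ref):
        if tgt[w] + ref[w] < min_count:
            continue  # skip very rare words
        p_t = (tgt[w] + 1) / (n_tgt + 1)
        p_r = (ref[w] + 1) / (n_ref + 1)
        scores[w] = math.log(p_t / p_r)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage: articles mentioning "happiness" vs. the rest of the corpus
happy = ["the wedding made us happy together with mother"]
other = ["the new iphone and ipad cost millions"]
print(log_odds(happy, other, min_count=1)[:5])
```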

  1. Reading Pictures for Story Comprehension Requires Mental Imagery Skills

    NARCIS (Netherlands)

    Boerma, Inouk E; Mol, Suzanne E; Jolles, Jelle

    2016-01-01

    We examined the role of mental imagery skills in story comprehension in 150 fifth graders (10- to 12-year-olds) when reading a narrative book chapter with alternating words and pictures (i.e., text blocks alternated with one- or two-page picture spreads). A parallel group design was used, in

  2. Effect of Perceptual Load on Semantic Access by Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Herve

    2013-01-01

    Purpose: To examine whether semantic access by speech requires attention in children. Method: Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load,…

  3. Age Differences in Adults' Free Recall of Pictures and Words.

    Science.gov (United States)

    Gounard, Beverley Roberts; Keitz, Suzanne M.

    This study was designed to determine whether adults' memory for pictorial and word stimuli might be differentially affected by age. Twenty female secretaries, median age 22.1, and 20 female members of a senior citizens' center, median age 69.4, were asked to learn lists of pictorial and word stimuli under free recall conditions. Eight trials were…

  4. Relation Education Index Norms for 500 Picture Pairs and 10 Relations: High School Sample. Technical Report No. 6.

    Science.gov (United States)

    Haynes, James L.; And Others

    Mode of presentation (word vs. picture) is said to be a factor in social class differences in performance on analogy tests. To investigate this contention, data were needed on equivalent word and picture analogy test performance. This report presents data on relation education index (REI) norms for 500 picture pairs collected in the process of…

  5. Stress priming in picture naming: an SOA study.

    Science.gov (United States)

    Schiller, Niels O; Fikkert, Paula; Levelt, Clara C

    2004-01-01

    This study investigates whether or not the representation of lexical stress information can be primed during speech production. In four experiments, we attempted to prime the stress position of bisyllabic target nouns (picture names) having initial and final stress with auditory prime words having either the same or different stress as the target (e.g., WORtel-MOtor vs. koSTUUM-MOtor; capital letters indicate stressed syllables in prime-target pairs). Furthermore, half of the prime words were semantically related, the other half unrelated. Overall, picture names were not produced faster when the prime word had the same stress as the target than when the prime had different stress, i.e., there was no stress-priming effect in any experiment. This result would not be expected if stress were stored in the lexicon. However, targets with initial stress were responded to faster than final-stress targets. The reason for this effect was neither the quality of the pictures, nor the frequency of occurrence, nor voice-key characteristics. We hypothesize here that this stress effect is a genuine encoding effect, i.e., words with stress on the second syllable take longer to be encoded because their stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular with respect to metrical stress rules in Dutch. The results of the experiments are discussed in the framework of models of phonological encoding.

  6. Dynamic Influence of Emotional States on Novel Word Learning

    Science.gov (United States)

    Guo, Jingjing; Zou, Tiantian; Peng, Danling

    2018-01-01

    Many researchers recognize that it is unrealistic to isolate language learning and processing from emotion. However, few studies of language learning have so far taken emotion into consideration, so the likely influences of emotion on language learning remain unclear. The current study therefore aimed to examine the effects of emotional states on novel word learning and how those effects change as learning continues and the task varies. Positive, negative, or neutral pictures were employed to induce a given emotional state, and participants then learned novel words through association with line-drawing pictures in four successive learning phases. At the end of each learning phase, participants were instructed to complete a semantic category judgment task (Experiment 1) or a word-picture semantic consistency judgment task (Experiment 2) to explore the effects of emotional states on different depths of word learning. Converging results demonstrated that a negative emotional state led to worse performance than the neutral condition; however, how a positive emotional state affected learning varied with the learning task. Specifically, a facilitative role of positive emotional state was observed in semantic category learning but disappeared in learning of word-specific meanings. Moreover, the emotional modulation of novel word learning was dynamic and changed as learning continued, and the final attainment of the learned words tended to be similar under the different emotional states. The findings suggest that the impact of emotion can be offset as novel words become increasingly familiar and part of the existing lexicon. PMID:29695994

  7. Morphing Images: A Potential Tool for Teaching Word Recognition to Children with Severe Learning Difficulties

    Science.gov (United States)

    Sheehy, Kieron

    2005-01-01

    Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…

  8. Going political – multimodal metaphor framings on a cover of the sports newspaper A Bola

    Directory of Open Access Journals (Sweden)

    Maria Clotilde Almeida

    2016-12-01

    Full Text Available This paper analyses a politically oriented multimodal metaphor on a cover of the sports newspaper A Bola, following an earlier study of multimodal metaphors deployed on covers of the same newspaper pertaining to the 2014 Football World Cup in Brazil (Almeida/Sousa, 2015), in the light of Forceville (2009, 2012). The fact that European politics is mapped onto football in multimodal metaphors on this cover draws on the interplay of conceptual metaphors in the visual mode and in the written mode. Furthermore, there is a relevant time-bound leitmotif which motivates the mapping of politics onto football in A Bola, namely the upcoming football match between Portugal and Germany. In the multimodal framing of the storyline under analysis, the visual mode apparently assumes preponderance, since a picture of Angela Merkel, a prominent EU leader, is clearly overshadowed by a large picture of Cristiano Ronaldo, the captain of the Portuguese national football team. However, the visual rendering of Cristiano Ronaldo’s dominance over Angela Merkel is intertwined with the powerful metaphorical headline “Vamos expulsar a Alemanha do Euro” (“Let’s kick Germany out of the European Championship”), intended to boost the courage of the Portuguese national football team: “Go Portugal – you can win this time!”. Thus, unlike the multimodal metaphors on other covers of the same newspaper, the visual modality in this case cannot be considered the dominant factor in multimodal meaning creation in this politically oriented layout. Keywords: Multimodal Metaphors. Sports and Politics. Metaphors in Sports Newspapers.

  9. Directed forgetting of complex pictures in an item method paradigm.

    Science.gov (United States)

    Hauswald, Anne; Kissler, Johanna

    2008-11-01

    An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task where all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect--that is, better recognition of "to-be-remembered" than of "to-be-forgotten" pictures--was observed, although its size was smaller than previously reported for words or line drawings. The magnitude of the directed forgetting effect correlated negatively with participants' depression and dissociation scores. The results indicate that, at least in an item method, directed forgetting occurs for complex pictures as well as words and simple line drawings. Furthermore, people with higher levels of dissociative or depressive symptoms exhibit altered memory encoding patterns.

  10. Subjective qualities of memories associated with the picture superiority effect in schizophrenia.

    Science.gov (United States)

    Huron, Caroline; Danion, Jean-Marie; Rizzo, Lydia; Killofer, Valérie; Damiens, Annabelle

    2003-02-01

    Patients with schizophrenia (n = 24) matched with 24 normal subjects were presented with both words and pictures. On a recognition memory task, they were asked to give remember, know, or guess responses to items that were recognized on the basis of conscious recollection, familiarity, or guessing, respectively. Compared with normal subjects, patients exhibited a lower picture superiority effect selectively related to remember responses. Unlike normal subjects, they did not exhibit any word superiority effect in relation to guess responses; this explains why the overall picture superiority effect appeared to be intact. These results emphasize the need to take into account the subjective states of awareness when analyzing memory impairments in schizophrenia.

  11. Preserved conceptual implicit memory for pictures in patients with Alzheimer’s disease

    OpenAIRE

    Deason, Rebecca G.; Hussey, Erin P.; Flannery, Sean; Ally, Brandon A.

    2015-01-01

    The current study examined different aspects of conceptual implicit memory in patients with mild Alzheimer’s disease (AD). Specifically, we were interested in whether priming of distinctive conceptual features versus general semantic information related to pictures and words would differ for the mild AD patients and healthy older adults. In this study, 14 healthy older adults and 15 patients with mild AD studied both pictures and words followed by an implicit test section, where they were ask...

  12. Oral-diadochokinetic rates for Hebrew-speaking school-age children: real words vs. non-words repetition.

    Science.gov (United States)

    Icht, Michal; Ben-David, Boaz M

    2015-02-01

    Oral-diadochokinesis (DDK) tasks are a common tool for evaluating speech disorders. Usually, these tasks involve repetitions of non-words. It has been suggested that repeating real words can be more suitable for preschool children, but the impact of using real words with elementary school children has not yet been studied. This study evaluated oral-DDK rates for Hebrew-speaking elementary school children using non-words and real words. The participants were 60 children, 9-11 years old, with normal speech and language development, who were asked to repeat "pataka" (non-word) and "bodeket" (Hebrew real word). The data replicate the advantage generally found for real-word repetition with preschoolers. Children produced real words faster than non-words in all age groups, and repetition rates were higher for the older children. The findings suggest that adding real words to the standard oral-DDK task with elementary school children may provide a more comprehensive picture of oro-motor function.

  13. Neurophysiological evidence for the interplay of speech segmentation and word-referent mapping during novel word learning.

    Science.gov (United States)

    François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni

    2017-04-01

    Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representations (the word-to-world mapping problem). Recent behavioral studies have revealed that statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have largely been studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and whether they share common neurophysiological features. To address this question, we recorded the EEG of 20 adult participants during both an audio-alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested both for the implicit detection of online mismatches (structural auditory and visual semantic violations) and for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio-alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio-alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Task choice and semantic interference in picture naming

    OpenAIRE

    Piai, V.; Roelofs, A.P.A.; Schriefers, H.J.

    2015-01-01

    Evidence from dual-task performance indicates that speakers prefer not to select simultaneous responses in picture naming and another unrelated task, suggesting a response selection bottleneck in naming. In particular, when participants respond to tones with a manual response and name pictures with superimposed semantically related or unrelated distractor words, semantic interference in naming tends to be constant across stimulus onset asynchronies (SOAs) between the tone stimulus and the pic...

  15. Improved vocabulary production after naming therapy in aphasia: can gains in picture naming generalize to connected speech?

    Science.gov (United States)

    Conroy, Paul; Sage, Karen; Ralph, Matt Lambon

    2009-01-01

    Naming accuracy for nouns and verbs in aphasia can vary across different elicitation contexts, for example, simple picture naming, composite picture description, narratives, and conversation. For some people with aphasia, naming may be more accurate to simple pictures as opposed to naming in spontaneous, connected speech; for others, the opposite pattern may be evident. These differences have, in some instances, been related to word class (for example, noun or verb) as well as aphasia subtype. Given that the aim of picture-naming therapies is to improve word-finding in general, these differences in naming accuracy across contexts may have important implications for the potential functional benefits of picture-naming therapies. This study aimed to explore single-word therapy for both nouns and verbs, and to answer the following questions. (1) To what extent does an increase in naming accuracy after picture-naming therapy (for both nouns and verbs) predict accurate naming of the same items in less constrained spontaneous connected speech tasks such as composite picture description and retelling of a narrative? (2) Does the word class targeted in therapy (verb or noun) dictate whether there is 'carry-over' of the therapy item to connected speech tasks? (3) Does the speed at which the picture is named after therapy predict whether it will also be used appropriately in connected speech tasks? Seven participants with aphasia of varying degrees of severity and subtype took part in ten therapy sessions over five weeks. A set of potentially useful items was collected from control participant accounts of the Cookie Theft Picture Description and the Cinderella Story from the Quantitative Production Analysis. Twenty-four of these words (twelve verbs and twelve nouns) were collated for each participant, on the basis that they had failed to name them in either simple picture naming or connected speech tasks (picture-supported narrative and unsupported retelling of a narrative

  16. Learning and memory for sequences of pictures, words, and spatial locations: an exploration of serial position effects.

    Science.gov (United States)

    Bonk, William J; Healy, Alice F

    2010-01-01

    A serial reproduction of order with distractors task was developed to make it possible to observe successive snapshots of the learning process at each serial position. The new task was used to explore the effect of several variables on serial memory performance: stimulus content (words, blanks, and pictures), presentation condition (spatial information vs. none), semantically categorized item clustering (grouped vs. ungrouped), and number of distractors relative to targets (none, equal, double). These encoding and retrieval variables, along with learning attempt number, affected both overall performance levels and the shape of the serial position function, although a large and extensive primacy advantage and a small 1-item recency advantage were found in each case. These results were explained well by a version of the scale-independent memory, perception, and learning model that accounted for improved performance by increasing the value of only a single parameter that reflects reduced interference from distant items.

  17. Picturing survival memories: enhanced memory after fitness-relevant processing occurs for verbal and visual stimuli.

    Science.gov (United States)

    Otgaar, Henry; Smeets, Tom; van Bergen, Saskia

    2010-01-01

    Recent studies have shown that processing words according to a survival scenario leads to superior retention relative to control conditions. Here, we examined whether a survival recall advantage could be elicited by using pictures. Furthermore, in Experiment 1, we were interested in whether survival processing also results in improved memory for details. Undergraduates rated the relevance of pictures in a survival, moving, or pleasantness scenario and were subsequently given a surprise free recall test. We found that survival processing yielded superior retention. We also found that distortions occurred more often in the survival condition than in the pleasantness condition. In Experiment 2, we directly compared the survival recall effect between pictures and words. A comparable survival recall advantage was found for pictures and words. The present findings support the idea that memory is enhanced by processing information in terms of fitness value, yet at the same time, the present results suggest that this may increase the risk for memory distortions.

  18. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    Science.gov (United States)

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  19. Picture-books: first structured reading materials for children

    Directory of Open Access Journals (Sweden)

    Ivana Martinović

    2012-11-01

    Full Text Available Early literacy has recently become a current topic, and there is a widespread belief that literacy starts developing almost as soon as the child is born, provided the child is surrounded by adequate materials and by persons who will motivate the development of literacy. The first structured reading materials that a child interacts with are picture-books. They are usually the child's first contact with literature and with the written word in general, and this contact happens during the child's most sensitive period, which is why it is important to pay special attention to the quality of picture-books. Croatian picture-books published up to the early 1980s have been investigated to some extent. However, the picture-books found on the Croatian market and in libraries in the past 30 years have been the subject of research only sporadically. There is little data on the quality and features of this multifunctional material that is of such great importance for children. The aim of the paper is to give an overview of the relevant data found in the literature on the historical development of picture-book publishing, picture-book features, the functions they help develop, their age-appropriateness, and their quality. The paper presents research results stemming from the analysis of the Croatian Children's Book Centre documentation on contemporary picture-book publishing and data on the language of picture-books resulting from a picture-book corpus study carried out as part of the author's PhD research. The data on contemporary authors and illustrators was obtained by analysing the documentation of the Croatian Library Association's Commission for library services for children and youth. The language of the picture-book corpus was analysed using a computer programme, i.e., the analysis addressed the lexical diversity of picture-books for three-year-olds. The picture-books have not been investigated from the linguistic perspective before, which makes this

  20. Automatic lip reading by using multimodal visual features

    Science.gov (United States)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been studied for a long time, but it does not work well in noisy places such as in a car or on a train. In addition, people who are hearing-impaired or have difficulty hearing cannot benefit from speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information but also from visual information, such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech using multimodal visual information, without using any audio information. First, the ASM (Active Shape Model) is used to detect and track the face and lips in a video sequence. Second, the shape, optical flow, and spatial frequencies of the lip features are extracted from the lip region detected by the ASM. Next, the extracted multimodal features are ordered chronologically and a Support Vector Machine is used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
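
    The sketch below illustrates only the final classification stage described above: chronologically ordered, fixed-length multimodal lip-feature vectors fed to a support vector machine. The feature values are random placeholders standing in for the ASM shape, optical-flow, and spatial-frequency descriptors, so this is a minimal sketch of the approach rather than the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder features: one fixed-length vector per utterance, built by
# concatenating frame-wise lip descriptors in chronological order
# (the real features would come from ASM tracking of the lip region).
n_words, n_samples_per_word, n_features = 5, 20, 120
X = rng.normal(size=(n_words * n_samples_per_word, n_features))
y = np.repeat(np.arange(n_words), n_samples_per_word)
# Inject a word-specific offset so the toy data is actually separable.
X += y[:, None] * 0.5

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[::2], y[::2])              # train on half the utterances
print(clf.score(X[1::2], y[1::2]))   # held-out word recognition accuracy
```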

  1. Teach yourself visually Word 2013

    CERN Document Server

    Marmel, Elaine

    2013-01-01

    Get up to speed on the newest version of Word with visual instruction. Microsoft Word is the standard for word processing programs, and the newest version offers additional functionality you'll want to use. Get up to speed quickly and easily with the step-by-step instructions and full-color screen shots in this popular guide! You'll see how to perform dozens of tasks, including how to set up and format documents and text; work with diagrams, charts, and pictures; use Mail Merge; post documents online; and much more. Easy-to-follow, two-page lessons make learning a snap. Full-

  2. Are Pictures Good for Learning New Vocabulary in a Foreign Language? Only If You Think They Are Not

    Science.gov (United States)

    Carpenter, Shana K.; Olson, Kellie M.

    2012-01-01

    The current study explored whether new words in a foreign language are learned better from pictures than from native language translations. In both between-subjects and within-subject designs, Swahili words were not learned better from pictures than from English translations (Experiments 1-3). Judgments of learning revealed that participants…

  3. When Wine and Apple Both Help the Production of Grapes: ERP Evidence for Post-lexical Semantic Facilitation in Picture Naming.

    Science.gov (United States)

    Python, Grégoire; Fargier, Raphaël; Laganaro, Marina

    2018-01-01

    Background: Producing a word in referential naming requires selecting the right word in our mental lexicon among co-activated, semantically related words. The mechanisms underlying semantic context effects during speech planning are still controversial, particularly for semantic facilitation, whose investigation remains under-represented in contrast to the plethora of studies dealing with interference. Our aim is to study the time course of semantic facilitation in picture naming, using a picture-word "interference" paradigm and event-related potentials (ERPs). Methods: We compared two different types of semantic relationships, associative and categorical, in a single-word priming and a double-word priming paradigm. The primes were presented visually with a long negative Stimulus Onset Asynchrony (SOA), which is expected to cause facilitation. Results: Shorter naming latencies were observed after both associative and categorical primes, as compared to unrelated primes, and even shorter latencies after two primes. Electrophysiological results showed relatively late modulations of waveform amplitudes for both types of primes (beginning ~330 ms post picture onset with a single prime and ~275 ms post picture onset with two primes), corresponding to a shift in latency of similar topographic maps across conditions. Conclusion: The present results are in favor of a post-lexical locus of semantic facilitation for associative and categorical priming in picture naming and confirm that semantic facilitation is as relevant as semantic interference for informing accounts of word production. The post-lexical locus argued for here might be related to self-monitoring and/or to modulations at the level of word-form planning, without excluding the participation of strategic processes.

  4. The Intersection of Words and Pictures: Second through Fourth Graders Read Graphic Novels

    Science.gov (United States)

    Boerman-Cornell, William

    2016-01-01

    This study analyzes how second, third, and fourth graders in a racially integrated suburban school engaged in multimodal meaning making in the context of a book club discussing Ben Hatke's graphic novel "Zita the Spacegirl." Qualitative analysis of field notes and assessments indicated three overall findings: First, students responded to…

  5. Negative induced mood influences word production: An event-related potentials study with a covert picture naming task.

    Science.gov (United States)

    Hinojosa, J A; Fernández-Folgueiras, U; Albert, J; Santaniello, G; Pozo, M A; Capilla, A

    2017-01-27

    The present event-related potentials (ERPs) study investigated the effects of mood on the phonological encoding processes involved in word generation. For this purpose, negative, positive, and neutral affective states were induced in participants during three different recording sessions using short film clips. After the mood induction procedure, participants performed a covert picture naming task in which they searched for letters. The negative compared to the neutral mood condition elicited more negative amplitudes in a component peaking around 290 ms. Furthermore, results from source localization analyses suggested that this activity was potentially generated in the left prefrontal cortex. In contrast, no differences were found in the comparison between positive and neutral moods. Overall, the current data suggest that processes involved in the retrieval of phonological information during speech generation are impaired when participants are in a negative mood. The mechanisms underlying these effects are discussed in relation to linguistic and attentional processes, as well as in terms of the use of heuristics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Picture Books as Mentor Texts for 10th Grade Struggling Writers

    Science.gov (United States)

    Premont, David Willett; Young, Terrell A.; Wilcox, Brad; Dean, Deborah; Morrison, Timothy G.

    2017-01-01

    The purpose of this study was to determine if picture books in high school classrooms could enhance word choice, sentence fluency, and conventions. Previous research has not fully considered employing picture books as mentor texts in high schools. Twelve participants from two low-performing 10th grade English classes were identified as low-,…

  7. Semantic category interference in overt picture naming

    NARCIS (Netherlands)

    Maess, B.; Friederici, A.D.; Damian, M.F.; Meyer, A.S.; Levelt, W.J.M.

    2002-01-01

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures

  8. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    Directory of Open Access Journals (Sweden)

    Christian Stephan-Otto

    Full Text Available Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  9. The organization of words and environmental sounds in memory.

    Science.gov (United States)

    Hendrickson, Kristi; Walenski, Matthew; Friend, Margaret; Love, Tracy

    2015-03-01

    In the present study we used event-related potentials to compare the organization of linguistic and meaningful nonlinguistic sounds in memory. We examined N400 amplitudes as adults viewed pictures presented with words or environmental sounds that matched the picture (Match), that shared semantic features with the expected match (Near Violation), or that shared relatively few semantic features with the expected match (Far Violation). Words demonstrated incremental N400 amplitudes based on featural similarity from 300-700 ms, such that both Near and Far Violations exhibited significant N400 effects; however, Far Violations exhibited greater N400 effects than Near Violations. For environmental sounds, Far Violations but not Near Violations elicited significant N400 effects, in both early (300-400 ms) and late (500-700 ms) time windows, though a graded pattern similar to that of words was seen in the mid-latency time window (400-500 ms). These results indicate that the organization of words and environmental sounds in memory is differentially influenced by featural similarity, with a consistently fine-grained graded structure for words but not sounds. Published by Elsevier Ltd.
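
    For readers unfamiliar with this kind of analysis, the sketch below shows how a mean ERP amplitude in a given time window (here an assumed 300-700 ms N400 window) can be computed per condition from an epochs-by-samples array. The data array, sampling rate, and condition split are synthetic placeholders, so this is only an illustration of the measurement, not the authors' pipeline.

```python
import numpy as np

def mean_amplitude(epochs, times, t_start, t_stop):
    """Mean ERP amplitude per epoch within a time window.

    epochs: array (n_epochs, n_times), single-channel/ROI-averaged voltage
    times:  array (n_times,), sample times in seconds
    """
    mask = (times >= t_start) & (times < t_stop)
    return epochs[:, mask].mean(axis=1)

# Synthetic example: 40 epochs, 1 s of data sampled at 250 Hz
times = np.arange(0, 1.0, 1 / 250)
epochs = np.random.default_rng(1).normal(size=(40, times.size))

# Compare two conditions (e.g., Match vs. Far Violation) in a 300-700 ms window
match_amp = mean_amplitude(epochs[:20], times, 0.300, 0.700)
far_amp = mean_amplitude(epochs[20:], times, 0.300, 0.700)
print(match_amp.mean() - far_amp.mean())  # crude N400 effect estimate
```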

  10. Locus of Semantic Interference in Picture Naming: Evidence from Dual-Task Performance

    Science.gov (United States)

    Piai, Vitória; Roelofs, Ardi; Schriefers, Herbert

    2014-01-01

    Disagreement exists regarding the functional locus of semantic interference of distractor words in picture naming. This effect is a cornerstone of modern psycholinguistic models of word production, which assume that it arises in lexical response-selection. However, recent evidence from studies of dual-task performance suggests a locus in…

  11. Fast Brain Plasticity during Word Learning in Musically-Trained Children.

    Science.gov (United States)

    Dittinger, Eva; Chobert, Julie; Ziegler, Johannes C; Besson, Mireille

    2017-01-01

    Children learn new words every day and this ability requires auditory perception, phoneme discrimination, attention, associative learning and semantic memory. Based on previous results showing that some of these functions are enhanced by music training, we investigated learning of novel words through picture-word associations in musically-trained and control children (8-12 year-old) to determine whether music training would positively influence word learning. Results showed that musically-trained children outperformed controls in a learning paradigm that included picture-sound matching and semantic associations. Moreover, the differences between unexpected and expected learned words, as reflected by the N200 and N400 effects, were larger in children with music training compared to controls after only 3 min of learning the meaning of novel words. In line with previous results in adults, these findings clearly demonstrate a correlation between music training and better word learning. It is argued that these benefits reflect both bottom-up and top-down influences. The present learning paradigm might provide a useful dynamic diagnostic tool to determine which perceptive and cognitive functions are impaired in children with learning difficulties.

  12. Fast Brain Plasticity during Word Learning in Musically-Trained Children

    Directory of Open Access Journals (Sweden)

    Eva Dittinger

    2017-05-01

    Full Text Available Children learn new words every day and this ability requires auditory perception, phoneme discrimination, attention, associative learning and semantic memory. Based on previous results showing that some of these functions are enhanced by music training, we investigated learning of novel words through picture-word associations in musically-trained and control children (8–12 year-old) to determine whether music training would positively influence word learning. Results showed that musically-trained children outperformed controls in a learning paradigm that included picture-sound matching and semantic associations. Moreover, the differences between unexpected and expected learned words, as reflected by the N200 and N400 effects, were larger in children with music training compared to controls after only 3 min of learning the meaning of novel words. In line with previous results in adults, these findings clearly demonstrate a correlation between music training and better word learning. It is argued that these benefits reflect both bottom-up and top-down influences. The present learning paradigm might provide a useful dynamic diagnostic tool to determine which perceptive and cognitive functions are impaired in children with learning difficulties.

  13. The role of age on reactivity and memory for emotional pictures.

    Science.gov (United States)

    Christianson, S A; Fällman, L

    1990-01-01

    The purpose of this study was to investigate subjects' reactivity to emotional pictures and their recollection of these pictures, and to examine these two factors as they relate to age. Adolescents and young adults were shown emotionally arousing scenic pictures for long (4-s) and very brief (50-ms) durations. Recognition of the pictures and recall and recognition of words presented along with the pictures were assessed both immediately after the presentation and six weeks later. The results showed that very negative pictures are retained better than neutral or even positive pictures, and that very negative pictures reduce memory for associated information. It was also found that adolescents show a somewhat lower reactivity to very negative pictures and a higher degree of retention of these pictures than adults. The results are discussed in relation to (a) habituation effects, (b) strategies that subjects might develop to block emotional involvement, and (c) the notion that watching violence might serve as a powerful prime to socially undesirable behaviour.

  14. Font Size Matters—Emotion and Attention in Cortical Responses to Written Words

    OpenAIRE

    Bayer, Mareike; Sommer, Werner; Schacht, Annekathrin

    2012-01-01

    For emotional pictures with fear-, disgust-, or sex-related contents, stimulus size has been shown to increase emotion effects in attention-related event-related potentials (ERPs), presumably reflecting the enhanced biological impact of larger emotion-inducing pictures. If this is true, size should not enhance emotion effects for written words with symbolic and acquired meaning. Here, we investigated ERP effects of font size for emotional and neutral words. While P1 and N1 amplitu...

  15. Effects of Opportunities for Word Retrieval during Second Language Vocabulary Learning

    Science.gov (United States)

    Barcroft, Joe

    2007-01-01

    Research suggests that memory for an item improves when one is allowed to retrieve the item (Slamecka & Graf, 1978). This study explored benefits of providing opportunities for target-word retrieval during second language vocabulary learning. English speakers studied new Spanish words while viewing 24 word-picture pairs. They first viewed all 24…

  16. Domain-specific and domain-general constraints on word and sequence learning.

    Science.gov (United States)

    Archibald, Lisa M D; Joanisse, Marc F

    2013-02-01

    The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word-picture and nonword-picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.

  17. Defining a Conceptual Topography of Word Concreteness: Clustering Properties of Emotion, Sensation, and Magnitude among 750 English Words

    Directory of Open Access Journals (Sweden)

    Joshua Troche

    2017-10-01

    Full Text Available Cognitive science has a longstanding interest in the ways that people acquire and use abstract vs. concrete words (e.g., truth vs. piano). One dominant theory holds that abstract and concrete words are subserved by two parallel semantic systems. We recently proposed an alternative account of abstract-concrete word representation premised upon a unitary, high dimensional semantic space wherein word meaning is nested. We hypothesize that a range of cognitive and perceptual dimensions (e.g., emotion, time, space, color, size, visual form) bound this space, forming a conceptual topography. Here we report a normative study where we examined the clustering properties of a sample of English words (N = 750) spanning a spectrum of concreteness in a continuous manner from highly abstract to highly concrete. Participants (N = 328) rated each target word on a range of 14 cognitive dimensions (e.g., color, emotion, valence, polarity, motion, space). The dimensions reduced to three factors: Endogenous factor, Exogenous factor, and Magnitude factor. Concepts were plotted in a unified, multimodal space with concrete and abstract concepts along a continuous continuum. We discuss theoretical implications and practical applications of this dataset. These word norms are freely available for download and use at http://www.reilly-coglab.com/data/.

  18. Defining a Conceptual Topography of Word Concreteness: Clustering Properties of Emotion, Sensation, and Magnitude among 750 English Words.

    Science.gov (United States)

    Troche, Joshua; Crutch, Sebastian J; Reilly, Jamie

    2017-01-01

    Cognitive science has a longstanding interest in the ways that people acquire and use abstract vs. concrete words (e.g., truth vs. piano). One dominant theory holds that abstract and concrete words are subserved by two parallel semantic systems. We recently proposed an alternative account of abstract-concrete word representation premised upon a unitary, high dimensional semantic space wherein word meaning is nested. We hypothesize that a range of cognitive and perceptual dimensions (e.g., emotion, time, space, color, size, visual form) bound this space, forming a conceptual topography. Here we report a normative study where we examined the clustering properties of a sample of English words ( N = 750) spanning a spectrum of concreteness in a continuous manner from highly abstract to highly concrete. Participants ( N = 328) rated each target word on a range of 14 cognitive dimensions (e.g., color, emotion, valence, polarity, motion, space). The dimensions reduced to three factors: Endogenous factor, Exogenous factor, and Magnitude factor. Concepts were plotted in a unified, multimodal space with concrete and abstract concepts along a continuous continuum. We discuss theoretical implications and practical applications of this dataset. These word norms are freely available for download and use at http://www.reilly-coglab.com/data/.
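
    As a concrete illustration of the dimension-reduction step described in this record (14 rating dimensions reduced to three factors), the sketch below applies an off-the-shelf factor analysis to a words-by-dimensions rating matrix. The ratings here are random placeholders rather than the published norms, so treat it as a minimal sketch of the technique, not a reproduction of the authors' analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Placeholder ratings: 750 words x 14 cognitive/perceptual dimensions
# (the published norms would be loaded from the authors' dataset instead).
ratings = rng.normal(size=(750, 14))

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(StandardScaler().fit_transform(ratings))

print(scores.shape)          # (750, 3): each word placed in a 3-factor space
print(fa.components_.shape)  # (3, 14): loading of each dimension on each factor
```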

  19. The picture superiority effect in patients with Alzheimer’s disease and mild cognitive impairment

    OpenAIRE

    Ally, Brandon A.; Gold, Carl A.; Budson, Andrew E.

    2008-01-01

    The fact that pictures are better remembered than words has been reported in the literature for over 30 years. While this picture superiority effect has been consistently found in healthy young and older adults, no study has directly evaluated the presence of the effect in patients with Alzheimer’s disease (AD) or mild cognitive impairment (MCI). Clinical observations have indicated that pictures enhance memory in these patients, suggesting that the picture superiority effect may be intact. H...

  20. Using Constant Time Delay to Teach Braille Word Recognition

    Science.gov (United States)

    Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah

    2014-01-01

    Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlbrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…

  1. Semantic interference in picture naming during dual-task performance does not vary with reading ability.

    Science.gov (United States)

    Piai, Vitória; Roelofs, Ardi; Roete, Ingeborg

    2015-01-01

    Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may perhaps be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.

  2. Long-Term Interference at the Semantic Level: Evidence from Blocked-Cyclic Picture Matching

    Science.gov (United States)

    Wei, Tao; Schnur, Tatiana T.

    2016-01-01

    Processing semantically related stimuli creates interference across various domains of cognition, including language and memory. In this study, we identify the locus and mechanism of interference when retrieving meanings associated with words and pictures. Subjects matched a probe stimulus (e.g., cat) to its associated target picture (e.g., yarn)…

  3. Masked form priming in writing words from pictures: evidence for direct retrieval of orthographic codes.

    Science.gov (United States)

    Bonin, P; Fayol, M; Peereman, R

    1998-09-01

    Three experiments used the masked priming paradigm to investigate the role of orthographic and phonological information in written picture naming. In all the experiments, participants had to write the names of pictures as quickly as possible under three different priming conditions. Nonword primes could be: (1) phonologically and orthographically related to the picture name; (2) orthographically related as in (1) but phonologically related to a lesser degree than in (1); or (3) orthographically and phonologically unrelated except for the first consonant (or consonant cluster). Orthographic priming effects were observed with a prime exposure duration of 34 ms (Experiments 1 and 2) and of 51 ms (Experiment 3). In none of the experiments did homophony between primes and picture names yield an additional advantage. Taken together, these findings support the view that orthographic information is retrieved directly through lexical access in written picture naming, and thus argue against the traditional view that the retrieval of orthographic codes is obligatorily mediated by phonology.

  4. Looking at the bigger research picture | IDRC - International ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    It was more on what research I could do to influence IDRC's work.” ... In their own words: IDRC awardees share their experiences ... “This is what I really wanted to do, to understand the bigger picture.”.

  5. Interfering with free recall of words: Detrimental effects of phonological competition.

    Science.gov (United States)

    Fernandes, Myra A; Wammes, Jeffrey D; Priselac, Sandra; Moscovitch, Morris

    2016-09-01

    We examined the effect of different distracting tasks, performed concurrently during memory retrieval, on recall of a list of words. By manipulating the type of material and processing (semantic, orthographic, and phonological) required in the distracting task, and comparing the magnitude of memory interference produced, we aimed to infer the kind of representation upon which retrieval of words depends. In Experiment 1, identifying odd digits concurrently during free recall disrupted memory, relative to a full attention condition, when the numbers were presented orthographically (e.g. nineteen), but not numerically (e.g. 19). In Experiment 2, a distracting task that required phonological-based decisions to either word or picture material produced large, but equivalent effects on recall of words. In Experiment 3, phonological-based decisions to pictures in a distracting task disrupted recall more than when the same pictures required semantically-based size estimations. In Experiment 4, a distracting task that required syllable decisions to line drawings interfered significantly with recall, while an equally difficult semantically-based color-decision task about the same line drawings, did not. Together, these experiments demonstrate that the degree of memory interference experienced during recall of words depends primarily on whether the distracting task competes for phonological representations or processes, and less on competition for semantic or orthographic or material-specific representations or processes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. The organization of words and environmental sounds in memory

    Science.gov (United States)

    Hendrickson, Kristi; Walenski, Matthew; Friend, Margaret; Love, Tracy

    2015-01-01

    In the present study we used event-related potentials to compare the organization of linguistic and meaningful nonlinguistic sounds in memory. We examined N400 amplitudes as adults viewed pictures presented with words or environmental sounds that matched the picture (Match), that shared semantic features with the expected match (Near Violation), and that shared relatively few semantic features with the expected match (Far Violation). Words demonstrated incremental N400 amplitudes based on featural similarity from 300–700 ms, such that both Near and Far Violations exhibited significant N400 effects; however, Far Violations exhibited greater N400 effects than Near Violations. For environmental sounds, Far Violations but not Near Violations elicited significant N400 effects, in both early (300–400 ms) and late (500–700 ms) time windows, though a graded pattern similar to that of words was seen in the midlatency time window (400–500 ms). These results indicate that the organization of words and environmental sounds in memory is differentially influenced by featural similarity, with a consistently fine-grained graded structure for words but not sounds. PMID:25624059

  7. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Full Text Available Lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason the use of multi-modal speech processing has been limited. In this study, we have developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement from the light reflected from the mouth region. In our experiment, we achieved a word recognition rate of 66% using lip movement features alone. This result shows that the developed sensor can be utilized as a tool for multi-modal speech processing when combined with a microphone mounted on the headset.
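    The abstract does not specify how words were classified from the reflected-light trace. As a purely illustrative sketch (not the authors' method), one simple approach is nearest-template matching with dynamic time warping over the one-dimensional sensor signal; the vocabulary, signal shapes, and the dtw_distance/recognize helpers below are all hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(signal, templates):
    """Nearest-template word recognition over reflected-light traces."""
    return min(templates, key=lambda word: dtw_distance(signal, templates[word]))

# Hypothetical reflected-light templates (one per vocabulary word) and a noisy test trace.
t = np.linspace(0, 1, 50)
templates = {"hello": np.sin(2 * np.pi * 2 * t), "yes": np.sin(2 * np.pi * 4 * t)}
test = templates["yes"] + np.random.default_rng(5).normal(0, 0.1, 50)
print(recognize(test, templates))   # prints "yes" for this toy example
```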

  8. Long-term interference at the semantic level: Evidence from blocked-cyclic picture matching.

    Science.gov (United States)

    Wei, Tao; Schnur, Tatiana T

    2016-01-01

    Processing semantically related stimuli creates interference across various domains of cognition, including language and memory. In this study, we identify the locus and mechanism of interference when retrieving meanings associated with words and pictures. Subjects matched a probe stimulus (e.g., cat) to its associated target picture (e.g., yarn) from an array of unrelated pictures. Across trials, probes were either semantically related or unrelated. To test the locus of interference, we presented probes as either words or pictures. If semantic interference occurs at the stage common to both tasks, that is, access to semantic representations, then interference should occur in both probe presentation modalities. Results showed clear semantic interference effects independent of presentation modality and lexical frequency, confirming a semantic locus of interference in comprehension. To test the mechanism of interference, we repeated trials across 4 presentation cycles and manipulated the number of unrelated intervening trials (zero vs. two). We found that semantic interference was additive across cycles and survived 2 intervening trials, demonstrating interference to be long-lasting as opposed to short-lived. However, interference was smaller with zero versus 2 intervening trials, which we interpret to suggest that short-lived facilitation counteracted the long-lived interference. We propose that retrieving meanings associated with words/pictures from the same semantic category yields both interference due to long-lasting changes in connection strength between semantic representations (i.e., incremental learning) and facilitation caused by short-lived residual activation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Preserved conceptual implicit memory for pictures in patients with Alzheimer's disease.

    Science.gov (United States)

    Deason, Rebecca G; Hussey, Erin P; Flannery, Sean; Ally, Brandon A

    2015-10-01

    The current study examined different aspects of conceptual implicit memory in patients with mild Alzheimer's disease (AD). Specifically, we were interested in whether priming of distinctive conceptual features versus general semantic information related to pictures and words would differ for the mild AD patients and healthy older adults. In this study, 14 healthy older adults and 15 patients with mild AD studied both pictures and words followed by an implicit test section, where they were asked about distinctive conceptual or general semantic information related to the items they had previously studied (or novel items). Healthy older adults and patients with mild AD showed both conceptual priming and the picture superiority effect, but the AD patients only showed these effects for the questions focused on the distinctive conceptual information. We found that patients with mild AD showed intact conceptual picture priming in a task that required generating a response (answer) from a cue (question) for cues that focused on distinctive conceptual information. This experiment has helped improve our understanding of both the picture superiority effect and conceptual implicit memory in patients with mild AD in that these findings support the notion that conceptual implicit memory might potentially help to drive familiarity-based recognition in the face of impaired recollection in patients with mild AD. Copyright © 2015. Published by Elsevier Inc.

  10. Richness of information about novel words influences how episodic and semantic memory networks interact during lexicalization.

    Science.gov (United States)

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2014-01-01

    The complementary learning systems account of declarative memory suggests two distinct memory networks, a fast-mapping, episodic system involving the hippocampus, and a slower semantic memory system distributed across the neocortex in which new information is gradually integrated with existing representations. In this study, we investigated the extent to which these two networks are involved in the integration of novel words into the lexicon after extensive learning, and how the involvement of these networks changes after 24 hours. In particular, we explored whether having richer information at encoding influences the lexicalization trajectory. We trained participants with two sets of novel words, one where exposure was only to the words' phonological forms (the form-only condition), and one where pictures of unfamiliar objects were associated with the words' phonological forms (the picture-associated condition). A behavioral measure of lexical competition (indexing lexicalization) indicated stronger competition effects for the form-only words. Imaging (fMRI) results revealed greater involvement of phonological lexical processing areas immediately after training in the form-only condition, suggesting that tight connections were formed between novel words and existing lexical entries already at encoding. Retrieval of picture-associated novel words involved the episodic/hippocampal memory system more extensively. Although lexicalization was weaker in the picture-associated condition, overall memory strength was greater when tested after a 24-hour delay, probably due to the availability of both episodic and lexical memory networks to aid retrieval. It appears that, during lexicalization of a novel word, the relative involvement of different memory networks differs according to the richness of the information about that word available at encoding. © 2013.

  11. Effects of context and word class on lexical retrieval in Chinese speakers with anomic aphasia.

    Science.gov (United States)

    Law, Sam-Po; Kong, Anthony Pak-Hin; Lai, Loretta Wing-Shan; Lai, Christy

    2015-01-01

    Differences in processing nouns and verbs have been investigated intensely in psycholinguistics and neuropsychology in past decades. However, the majority of studies examining retrieval of these word classes have involved tasks of single word stimuli or responses. While the results have provided rich information for addressing issues about grammatical class distinctions, it is unclear whether they have adequate ecological validity for understanding lexical retrieval in connected speech which characterizes daily verbal communication. Previous investigations comparing retrieval of nouns and verbs in single word production and connected speech have reported either discrepant performance between the two contexts with presence of word class dissociation in picture naming but absence in connected speech, or null effects of word class. In addition, word finding difficulties have been found to be less severe in connected speech than picture naming. However, these studies have failed to match target stimuli of the two word classes and between tasks on psycholinguistic variables known to affect performance in response latency and/or accuracy. The present study compared lexical retrieval of nouns and verbs in picture naming and connected speech from picture description, procedural description, and story-telling among 19 Chinese speakers with anomic aphasia and their age, gender, and education matched healthy controls, to understand the influence of grammatical class on word production across speech contexts when target items were balanced for confounding variables between word classes and tasks. Elicitation of responses followed the protocol of the AphasiaBank consortium (http://talkbank.org/AphasiaBank/). Target words for confrontation naming were based on well-established naming tests, while those for narrative were drawn from a large database of normal speakers. Selected nouns and verbs in the two contexts were matched for age-of-acquisition (AoA) and familiarity

  12. Multimodality image registration with software: state-of-the-art

    International Nuclear Information System (INIS)

    Slomka, Piotr J.; Baum, Richard P.

    2009-01-01

    Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans ''paved the way'' for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)
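    As a rough illustration of the kind of intensity-based similarity metric that typically drives software co-registration of two modalities, the sketch below computes mutual information between two images with NumPy histograms. The mutual_information helper and the toy PET/CT arrays are assumptions for illustration only, not part of the software discussed in this record.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two intensity images; a registration routine
    would adjust the rigid-transform parameters until this value peaks."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy check: alignment with itself is more informative than a shifted copy.
rng = np.random.default_rng(0)
pet = rng.random((64, 64))
ct_misaligned = np.roll(pet, shift=3, axis=1)
print(mutual_information(pet, pet), mutual_information(pet, ct_misaligned))
```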

  13. The New Peabody Picture Vocabulary Test-III: An Illusion of Unbiased Assessment?

    Science.gov (United States)

    Stockman, Ida J

    2000-10-01

    This article examines whether changes in the ethnic minority composition of the standardization sample for the latest edition of the Peabody Picture Vocabulary Test (PPVT-III, Dunn & Dunn, 1997) can be used as the sole explanation for children's better test scores when compared to an earlier edition, the Peabody Picture Vocabulary Test-Revised (PPVT-R, Dunn & Dunn, 1981). Results from a comparative analysis of these two test editions suggest that other factors may explain improved performances. Among these factors are the number of words and age levels sampled, the types of words and pictures used, and characteristics of the standardization sample other than its ethnic minority composition. This analysis also raises questions regarding the usefulness of converting scores from one edition to the other and the type of criteria that could be used to evaluate whether the PPVT-III is an unbiased test of vocabulary for children from diverse cultural and linguistic backgrounds.

  14. Could a Multimodal Dictionary Serve as a Learning Tool? An Examination of the Impact of Technologically Enhanced Visual Glosses on L2 Text Comprehension

    Science.gov (United States)

    Sato, Takeshi

    2016-01-01

    This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it…

  15. Development of Multimodal Human Interface Technology

    Science.gov (United States)

    Hirose, Michitaka

    About 20 years have passed since the term “Virtual Reality” became popular. During these two decades, a novel human interface technology, so-called “multimodal interface technology,” has taken shape. In this paper, recent progress in real-time CG, BCI, and five-senses IT is first briefly reviewed. Since the life cycle of an information technology is said to be about 20 years, novel directions and paradigms of VR technology can be found in conjunction with the aforementioned technologies. At the end of the paper, futuristic directions such as ultra-realistic media are briefly introduced.

  16. A picture is worth a thousand words: Electronic cigarette content on Instagram and Pinterest

    Science.gov (United States)

    Lee, Alexander S.; Hart, Joy L.; Sears, Clara G.; Walker, Kandi L.; Siu, Allison; Smith, Courteney

    2017-01-01

    INTRODUCTION This study examined electronic cigarette (e-cig) content in visual materials posted on the social-media platforms Instagram and Pinterest. Both platforms allow users to upload pictures to the internet and share them globally. Users can search for pictures tagged with specific keywords and phrases. METHODS Using content analysis, this study identified themes in image postings of e-cigs on social media. During five weeks of data collection, keywords were used to identify pictures related to e-cigs. These pictures were then coded into one or more categories. RESULTS The three most popular categories for Instagram posts were marketing, customization and juices/flavors. The three most popular categories for Pinterest posts were customization, marketing and memes. CONCLUSIONS Because of the persuasive power of visuals, it is important to examine communication on Instagram and Pinterest as well as the specific visual messages communicated. Stores and manufacturers use these and similar platforms to communicate with users and potential users; thus it seems that marketers are capitalizing on opportunities for persuasive appeal. The results highlight the popularity of e-cig content on these two social media platforms and reveal an emphasis on marketing and customization. PMID:28815224

  17. A picture is worth a thousand words: Electronic cigarette content on Instagram and Pinterest.

    Science.gov (United States)

    Lee, Alexander S; Hart, Joy L; Sears, Clara G; Walker, Kandi L; Siu, Allison; Smith, Courteney

    2017-07-01

    This study examined electronic cigarette (e-cig) content in visual materials posted on the social-media platforms Instagram and Pinterest. Both platforms allow users to upload pictures to the internet and share them globally. Users can search for pictures tagged with specific keywords and phrases. Using content analysis, this study identified themes in image postings of e-cigs on social media. During five weeks of data collection, keywords were used to identify pictures related to e-cigs. These pictures were then coded into one or more categories. The three most popular categories for Instagram posts were marketing, customization and juices/flavors. The three most popular categories for Pinterest posts were customization, marketing and memes. Because of the persuasive power of visuals, it is important to examine communication on Instagram and Pinterest as well as the specific visual messages communicated. Stores and manufacturers use these and similar platforms to communicate with users and potential users; thus it seems that marketers are capitalizing on opportunities for persuasive appeal. The results highlight the popularity of e-cig content on these two social media platforms and reveal an emphasis on marketing and customization.

  18. More than Words: Comics as a Means of Teaching Multiple Literacies

    Science.gov (United States)

    Jacobs, Dale

    2007-01-01

    Historically, comics have been viewed as a "debased or simplified word-based literacy," explains Dale Jacobs, who considers comics to be complex, multimodal texts. Examining Ted Naifeh's "Polly and the Pirates," Jacobs shows how comics can engage students in multiple literacies, furthering meaning-making practices in the classroom and beyond.

  19. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

    Science.gov (United States)

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  20. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

    Directory of Open Access Journals (Sweden)

    Yoshinobu Hagiwara

    2018-03-01

    Full Text Available In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
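    The record's model performs inference with hierarchical multimodal latent Dirichlet allocation. As a deliberately simplified stand-in, the sketch below clusters concatenated vision/position/word feature vectors at two levels with plain k-means, just to convey the idea of grouping multimodal observations into coarse and fine spatial concepts; the data, the kmeans helper, and the two-level split are all hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; a simplified stand-in for the paper's hMLDA inference."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical multimodal observations: [vision features | x, y position | word one-hot].
rng = np.random.default_rng(1)
vision = rng.random((60, 8))
position = np.vstack([rng.normal(loc, 0.3, (20, 2)) for loc in ([0, 0], [5, 0], [0, 5])])
words = np.eye(3)[np.repeat(np.arange(3), 20)]
observations = np.hstack([vision, position, words])

coarse = kmeans(observations, k=2)   # upper level, e.g. "home" vs. "office"
fine = kmeans(observations, k=3)     # lower level, e.g. "table", "kitchen", "desk"
print(coarse[:10], fine[:10])
```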

  1. Levels of processing and picture memory: the physical superiority effect.

    Science.gov (United States)

    Intraub, H; Nicklos, S

    1985-04-01

    Six experiments studied the effect of physical orienting questions (e.g., "Is this angular?") and semantic orienting questions (e.g., "Is this edible?") on memory for unrelated pictures at stimulus durations ranging from 125-2,000 ms. Results ran contrary to the semantic superiority "rule of thumb," which is based primarily on verbal memory experiments. Physical questions were associated with better free recall and cued recall of a diverse set of visual scenes (Experiments 1, 2, and 4). This occurred both when general and highly specific semantic questions were used (Experiments 1 and 2). Similar results were obtained when more simplistic visual stimuli--photographs of single objects--were used (Experiments 5 and 6). As in the case of the semantic superiority effect with words, the physical superiority effect for pictures was eliminated or reversed when the same physical questions were repeated throughout the session (Experiments 4 and 6). Conflicts with results of previous levels of processing experiments with words and nonverbal stimuli (e.g., faces) are explained in terms of the sensory-semantic model (Nelson, Reed, & McEvoy, 1977). Implications for picture memory research and the levels of processing viewpoint are discussed.

  2. Preserved conceptual implicit memory for pictures in patients with Alzheimer’s disease

    Science.gov (United States)

    Deason, Rebecca G.; Hussey, Erin P.; Flannery, Sean; Ally, Brandon A.

    2015-01-01

    The current study examined different aspects of conceptual implicit memory in patients with mild Alzheimer’s disease (AD). Specifically, we were interested in whether priming of distinctive conceptual features versus general semantic information related to pictures and words would differ for the mild AD patients and healthy older adults. In this study, 14 healthy older adults and 15 patients with mild AD studied both pictures and words followed by an implicit test section, where they were asked about distinctive conceptual or general semantic information related to the items they had previously studied (or novel items). Healthy older adults and patients with mild AD showed both conceptual priming and the picture superiority effect, but the AD patients only showed these effects for the questions focused on the distinctive conceptual information. We found that patients with mild AD showed intact conceptual picture priming in a task that required generating a response (answer) from a cue (question) for cues that focused on distinctive conceptual information. This experiment has helped improve our understanding of both the picture superiority effect and conceptual implicit memory in patients with mild AD in that these findings support the notion that conceptual implicit memory might potentially help to drive familiarity-based recognition in the face of impaired recollection in patients with mild AD. PMID:26291521

  3. The paca that roared: Immediate cumulative semantic interference among newly acquired words.

    Science.gov (United States)

    Oppenheim, Gary M

    2018-08-01

    With 40,000 words in the average vocabulary, how can speakers find the specific words that they want so quickly and easily? Cumulative semantic interference in language production provides a clue: when naming a large series of pictures, with a few mammals sprinkled about, naming each subsequent mammal becomes slower and more error-prone. Such interference mirrors predictions from an incremental learning algorithm applied to meaning-driven retrieval from an established vocabulary, suggesting retrieval benefits from a constant, implicit, re-optimization process (Oppenheim et al., 2010). But how quickly would a new mammal (e.g. paca) engage in this re-optimization? In this experiment, 18 participants studied 3 novel and 3 familiar exemplars from each of six semantic categories, and immediately performed a timed picture-naming task. Consistent with the learning model's predictions, naming latencies revealed immediate cumulative semantic interference in all directions: from new words to new words, from new words to old words, from old words to new words, and from old words to old words. Repeating the procedure several days later produced similar-magnitude effects, demonstrating that newly acquired words can be immediately semantically integrated, at least to the extent necessary to produce typical cumulative semantic interference. These findings extend the Dark Side model's scope to include novel word production, and are considered in terms of mechanisms for lexical selection. Copyright © 2018 Elsevier B.V. All rights reserved.
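    In the spirit of the incremental learning account cited here (Oppenheim et al., 2010), the toy sketch below applies an error-driven update over a single shared semantic feature: naming one category member strengthens its own connection and weakens its rivals', so each later member of the category starts with lower activation. The weights, learning rate, and name_picture helper are illustrative assumptions, not the published model.

```python
import numpy as np

# One shared semantic feature (e.g. MAMMAL) connected to three lexical items.
W = np.full((1, 3), 0.5)
rate = 0.1
names = ["dog", "cat", "paca"]

def name_picture(target):
    """Error-driven update: boost the named item, weaken its co-activated rivals."""
    activation = W[0].copy()                      # items activated via the shared feature
    W[0, target] += rate * (1.0 - activation[target])
    others = [j for j in range(len(names)) if j != target]
    W[0, others] -= rate * activation[others]     # competitors lose ground
    return activation[target]                     # higher = easier/faster retrieval

for trial, target in enumerate([0, 1, 2]):        # name each mammal once, in sequence
    print(f"trial {trial}: {names[target]} retrieved at activation {name_picture(target):.3f}")
```

    Running the loop shows the target's starting activation dropping from 0.500 to 0.450 to 0.405 across the three category members, which is the signature of cumulative semantic interference.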

  4. Visual Attention to Print-Salient and Picture-Salient Environmental Print in Young Children

    Science.gov (United States)

    Neumann, Michelle M.; Summerfield, Katelyn; Neumann, David L.

    2015-01-01

    Environmental print is composed of words and contextual cues such as logos and pictures. The salience of the contextual cues may influence attention to words and thus the potential of environmental print in promoting early reading development. The present study explored this by presenting pre-readers (n = 20) and beginning readers (n = 16) with…

  5. Effects of context and word class on lexical retrieval in Chinese speakers with anomic aphasia

    Science.gov (United States)

    Law, Sam-Po; Kong, Anthony Pak-Hin; Lai, Loretta Wing-Shan; Lai, Christy

    2014-01-01

    Background Differences in processing nouns and verbs have been investigated intensely in psycholinguistics and neuropsychology in past decades. However, the majority of studies examining retrieval of these word classes have involved tasks of single word stimuli or responses. While the results have provided rich information for addressing issues about grammatical class distinctions, it is unclear whether they have adequate ecological validity for understanding lexical retrieval in connected speech which characterizes daily verbal communication. Previous investigations comparing retrieval of nouns and verbs in single word production and connected speech have reported either discrepant performance between the two contexts with presence of word class dissociation in picture naming but absence in connected speech, or null effects of word class. In addition, word finding difficulties have been found to be less severe in connected speech than picture naming. However, these studies have failed to match target stimuli of the two word classes and between tasks on psycholinguistic variables known to affect performance in response latency and/or accuracy. Aims The present study compared lexical retrieval of nouns and verbs in picture naming and connected speech from picture description, procedural description, and story-telling among 19 Chinese speakers with anomic aphasia and their age, gender, and education matched healthy controls, to understand the influence of grammatical class on word production across speech contexts when target items were balanced for confounding variables between word classes and tasks. Methods & Procedures Elicitation of responses followed the protocol of the AphasiaBank consortium (http://talkbank.org/AphasiaBank/). Target words for confrontation naming were based on well-established naming tests, while those for narrative were drawn from a large database of normal speakers. Selected nouns and verbs in the two contexts were matched for age

  6. Tracing attention and the activation flow in spoken word planning using eye-movements

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements

  7. The New Oxford Picture Dictionary, English/Navajo Edition.

    Science.gov (United States)

    Parnwell, E. C.

    This picture dictionary illustrates over 2,400 words. The dictionary is organized thematically, beginning with topics most useful for the survival needs of students in an English speaking country. However, teachers may adapt the order to reflect the needs of their students. Verbs are included on separate pages, but within topic areas in which they…

  8. Multimodality image registration with software: state-of-the-art

    Energy Technology Data Exchange (ETDEWEB)

    Slomka, Piotr J. [Cedars-Sinai Medical Center, AIM Program/Department of Imaging, Los Angeles, CA (United States); University of California, David Geffen School of Medicine, Los Angeles, CA (United States); Baum, Richard P. [Center for PET, Department of Nuclear Medicine, Bad Berka (Germany)

    2009-03-15

    Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans ''paved the way'' for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)

  9. A picture is worth a thousand words: Electronic cigarette content on Instagram and Pinterest

    OpenAIRE

    Alexander S. Lee; Joy L. Hart; Clara G Sears; Kandi L Walker; Allison Siu; Courteney Smith

    2017-01-01

    Introduction This study examined electronic cigarette (e-cig) content in visual materials posted on the social-media platforms Instagram and Pinterest. Both platforms allow users to upload pictures to the internet and share them globally. Users can search for pictures tagged with specific keywords and phrases. Methods Using content analysis, this study identified themes in image postings of e-cigs on social media. During five weeks of data collection, keywords were used ...

  10. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  11. Typing pictures: Linguistic processing cascades into finger movements.

    Science.gov (United States)

    Scaltritti, Michele; Arfé, Barbara; Torrance, Mark; Peressotti, Francesca

    2016-11-01

    The present study investigated the effect of psycholinguistic variables on measures of response latency and mean interkeystroke interval in a typewritten picture naming task, with the aim of outlining the functional organization of the stages of cognitive processing and response execution associated with typewritten word production. Onset latencies were modulated by lexical and semantic variables traditionally linked to lexical retrieval, such as word frequency, age of acquisition, and naming agreement. Orthographic variables, both at the lexical and sublexical level, appear to influence only within-word interkeystroke intervals, suggesting that orthographic information may play a relevant role in controlling actual response execution. Lexical-semantic variables also influenced speed of execution. This points towards a cascaded flow of activation between stages of lexical access and response execution. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    OpenAIRE

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2012-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual varia...

  13. Symbolic Understanding of Pictures in Low-Functioning Children with Autism: The Effects of Iconicity and Naming

    Science.gov (United States)

    Hartley, Calum; Allen, Melissa L.

    2015-01-01

    This research investigated whether symbolic understanding of pictures in low-functioning children with autism is mediated by iconicity and language. In Experiment 1, participants were taught novel words paired with unfamiliar pictures that varied in iconicity (black-and-white line drawings, greyscale photographs, colour line drawings, colour…

  14. Reversing the picture superiority effect: a speed-accuracy trade-off study of recognition memory.

    Science.gov (United States)

    Boldini, Angela; Russo, Riccardo; Punia, Sahiba; Avons, S E

    2007-01-01

    Speed-accuracy trade-off methods have been used to contrast single- and dual-process accounts of recognition memory. With these procedures, subjects are presented with individual test items and required to make recognition decisions under various time constraints. In three experiments, we presented words and pictures to be intentionally learned; test stimuli were always visually presented words. At test, we manipulated the interval between the presentation of each test stimulus and that of a response signal, thus controlling the amount of time available to retrieve target information. The standard picture superiority effect was significant in long response deadline conditions (i.e., > or = 2,000 msec). Conversely, a significant reverse picture superiority effect emerged at short response-signal deadlines (< 200 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory. Alternative accounts are also discussed.
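    Response-signal data like these are commonly summarized by a shifted-exponential approach to asymptotic accuracy, d'(t) = lambda * (1 - exp(-beta * (t - delta))) for times beyond the intercept delta. The sketch below simply evaluates that standard function at a few deadlines; the parameter values are invented, and this is not claimed to be the analysis used in the record.

```python
import numpy as np

def sat_curve(t, asymptote, rate, intercept):
    """Shifted-exponential speed-accuracy trade-off function:
    d'(t) = asymptote * (1 - exp(-rate * (t - intercept))) for t > intercept, else 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t > intercept,
                    asymptote * (1.0 - np.exp(-rate * (t - intercept))), 0.0)

deadlines = np.array([0.1, 0.2, 0.5, 1.0, 2.0])   # seconds from test-item onset
print(sat_curve(deadlines, asymptote=2.0, rate=4.0, intercept=0.15))
```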

  15. A neural network model of semantic memory linking feature-based object representation and words.

    Science.gov (United States)

    Cuppini, C; Magosso, E; Ursino, M

    2009-06-01

    Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and among the lexical area and the feature areas are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words and segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).
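    As a minimal illustration of the kind of time-dependent Hebbian learning the model relies on (leaving out the oscillators and gamma-band synchronization entirely), the sketch below strengthens feature-to-lexical synapses through co-activation with decay and then retrieves a word from an incomplete feature pattern. The layer sizes, rates, and the hebbian_step helper are assumptions, not the published network.

```python
import numpy as np

n_features, n_words = 20, 5
W = np.zeros((n_words, n_features))          # lexical <- feature synapses
eta, decay = 0.05, 0.01

def hebbian_step(feature_act, word_act):
    """Hebbian rule with decay: co-active units are strengthened, unused synapses fade."""
    global W
    W += eta * np.outer(word_act, feature_act) - decay * W

# Each hypothetical object activates its own block of four sensory-motor features.
prototypes = np.zeros((n_words, n_features))
for i in range(n_words):
    prototypes[i, 4 * i:4 * i + 4] = 1.0

rng = np.random.default_rng(2)
for _ in range(300):                         # paired presentations of object and word
    w = int(rng.integers(n_words))
    hebbian_step(prototypes[w], np.eye(n_words)[w])

# Retrieval from incomplete sensory input: half of word 3's features are missing.
probe = prototypes[3].copy()
probe[12:14] = 0.0
print("retrieved word:", int(np.argmax(W @ probe)))   # expected: 3
```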

  16. Acquired Affective Associations Induce Emotion Effects in Word Recognition: An ERP Study

    Science.gov (United States)

    Fritsch, Nathalie; Kuchinke, Lars

    2013-01-01

    The present study examined how contextual learning and in particular emotionality conditioning impacts the neural processing of words, as possible key factors for the acquisition of words' emotional connotation. 21 participants learned on five consecutive days associations between meaningless pseudowords and unpleasant or neutral pictures using an…

  17. Interplay of the production and picture superiority effects: a signal detection analysis.

    Science.gov (United States)

    Fawcett, Jonathan M; Quinlan, Chelsea K; Taylor, Tracy L

    2012-01-01

    Three experiments explored the interaction between the production effect (greater memory for produced compared to non-produced study items) and the picture superiority effect (greater memory for pictures compared to words). Pictures and words were presented in a blocked (E1) or mixed (E2, E3) design, each accompanied by an instruction to silently name (non-produced condition) or quietly mouth (produced condition) the corresponding referent. Memory was then tested for all study items as well as an equal number of foil items using a speeded (E1, E2) or self-paced (E3) yes-no recognition task. Experiments 1, 2, and 3 all revealed a small but reliable production × stimulus interaction. Production was also found to result in a liberal shift in response bias that could result in the overestimation of the production effect when measured using hits instead of sensitivity. Together our findings suggest that the application of multiple distinctive processes at study produces an especially discriminative memory trace at test, more so than the summation of each process individually.
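    The sensitivity and response-bias measures mentioned here are the standard equal-variance signal detection quantities. The sketch below computes d' and the criterion c from raw counts, using a common log-linear correction so that hit or false-alarm rates of 0 or 1 do not produce infinite z-scores; the counts are invented and the function is not the authors' analysis code.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance SDT: sensitivity d' and criterion c (negative c = liberal bias)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)            # log-linear correction
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Toy counts for a produced (mouthed) and a non-produced (silently read) condition.
print(dprime_and_criterion(42, 8, 12, 38))    # produced
print(dprime_and_criterion(35, 15, 10, 40))   # non-produced
```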

  18. Asymmetrical Switch Costs in Bilingual Language Production Induced by Reading Words

    Science.gov (United States)

    Peeters, David; Runnqvist, Elin; Bertrand, Daisy; Grainger, Jonathan

    2014-01-01

    We examined language-switching effects in French-English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language…

  19. Picture superiority in free recall: the effects of normal aging and primary degenerative dementia.

    Science.gov (United States)

    Rissenberg, M; Glanzer, M

    1986-01-01

    A key factor in the decline of memory with age may be a breakdown of communication in the information network involved in memory and cognitive processing. A special case of this communication is assumed to underlie the picture superiority effect in recall. From this hypothesis it follows that the picture superiority effect should lessen with age. In Experiment 1, three groups of adults (young, old normal, and old memory-impaired) were tested in free recall of pictures and word lists. As predicted, the picture superiority effect declined with age. Experiment 2 replicated these findings and showed, moreover, that the picture superiority effect can be reestablished in normal old adults by instructing them to verbalize overtly during item presentation.

  20. Correction of distortion of MR pictures for MR-guided robotic sterotactic procedures

    International Nuclear Information System (INIS)

    Jonckheere, E.A.; Kwoh, Y.S.

    1988-01-01

    Ever since magnetic resonance (MR) invaded the medical imaging field, it has played an increasingly important role and is even currently being considered for stereotactic guidance of probes in the brain. While MR pictures indeed convey more clinical information than CT, the geometry of MR pictures is, unfortunately, not as accurate as the geometry of CT pictures. In other words, if a square grid phantom is scanned, then the CT picture will show a square grid, while the MR picture will instead reveal a distorted grid. This distortion is primarily due to small variations in the static magnetic field. This small distortion does not impede radiological diagnosis; however, it is a source of concern if one contemplates utilizing the MR pictures for accurate stereotactic positioning of a probe at a very precise point in the brain. Another area of application where the distortion of the MR picture should be compensated for is the superposition of CT and MR pictures so that both sets of information can be used for diagnosis or stereotactic purposes. This paper essentially addresses the nonlinear distortion of MR pictures and how it could be compensated for through software manipulation of the MR picture.
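    One way to picture how a grid phantom supports software correction is to fit a smooth warp that maps the distorted grid intersections back to their known true positions. The sketch below does this with a least-squares 2-D polynomial; the fit_polynomial_warp helper, the chosen degree, and the synthetic quadratic distortion are illustrative assumptions, not the method of the paper.

```python
import numpy as np

def fit_polynomial_warp(observed, true_pts, degree=2):
    """Least-squares fit of a 2-D polynomial mapping distorted MR coordinates
    to true (phantom) coordinates; returns a correction function."""
    def design(pts):
        x, y = pts[:, 0], pts[:, 1]
        cols = [x**i * y**j for i in range(degree + 1) for j in range(degree + 1 - i)]
        return np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design(observed), true_pts, rcond=None)
    return lambda pts: design(np.atleast_2d(pts)) @ coeffs

# Synthetic phantom: a square grid seen through a mild quadratic distortion.
gx, gy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
true_pts = np.column_stack([gx.ravel(), gy.ravel()])
observed = true_pts + 0.05 * np.column_stack([true_pts[:, 0] ** 2, true_pts[:, 1] ** 2])

correct = fit_polynomial_warp(observed, true_pts)
print(np.abs(correct(observed) - true_pts).max())   # small residual after correction
```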

  1. Searching for the right word: Hybrid visual and memory search for words.

    Science.gov (United States)

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase, constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order

  2. Children's Incidental Memory for Pictures: Item Processing Versus List Organizations.

    Science.gov (United States)

    Ghatala, Elizabeth S.; Levin, Joel R.

    1981-01-01

    Two experiments which tested recall differences among young children indicated: (1) organizational factors, not item processing per se, influenced previously found differences in children's recall of pictures following semantic and physical orienting tasks; and (2) physical orienting tasks may effectively inhibit subjects' processing of words, but…

  3. The Effects of Word Length on Memory for Pictures: Evidence for Speech Coding in Young Children.

    Science.gov (United States)

    Hulme, Charles; And Others

    1986-01-01

    Three experiments demonstrate that children four to ten years old, when presented with a series recall task with pictures of common objects having short or long names, showed consistently better recall of pictures with short names. (HOD)

  4. Systemic multimodal approach to speech therapy treatment in autistic children.

    Science.gov (United States)

    Tamas, Daniela; Marković, Slavica; Milankov, Vesela

    2013-01-01

    The conditions in which speech therapy is applied to autistic children are often not in accordance with the characteristic ways in which people with autism think and learn. A systemic multimodal approach means motivating autistic people to develop their language and speech skills through a procedure that allows them to relive personal experience of the contents presented in their natural social environment. This research aimed to evaluate the efficiency of speech treatment based on the systemic multimodal approach to work with autistic children. The study sample consisted of 34 children, aged 8 to 16 years, diagnosed with different autistic disorders, whose results showed a moderate to severe clinical picture of autism on the Childhood Autism Rating Scale. The instruments applied for the evaluation of ability were the Childhood Autism Rating Scale and the Ganzberg II test. The study subjects were divided into two groups according to the type of treatment: children who received continuing treatment with the systemic multimodal approach, and children who received classical speech treatment. The systemic multimodal approach in teaching autistic children was shown to stimulate communication, socialization, self-care, and work, and the progress achieved in these areas of functioning was retained over the long term. By applying the systemic multimodal approach with autistic children and comparing their achievements on tests administered before, during, and after treatment, it was concluded that a measure of improvement in functioning was achieved within the diagnosed category. The results point to a possible direction for creating new methods, plans, and programs for working with autistic children based on empirical and interactive learning.

  5. Tracing Attention and the Activation Flow of Spoken Word Planning Using Eye Movements

    Science.gov (United States)

    Roelofs, Ardi

    2008-01-01

    The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The…

  6. The role of grammatical category information in spoken word retrieval.

    Science.gov (United States)

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production.

  7. Font size matters--emotion and attention in cortical responses to written words.

    Science.gov (United States)

    Bayer, Mareike; Sommer, Werner; Schacht, Annekathrin

    2012-01-01

    For emotional pictures with fear-, disgust-, or sex-related contents, stimulus size has been shown to increase emotion effects in attention-related event-related potentials (ERPs), presumably reflecting the enhanced biological impact of larger emotion-inducing pictures. If this is true, size should not enhance emotion effects for written words with symbolic and acquired meaning. Here, we investigated ERP effects of font size for emotional and neutral words. While P1 and N1 amplitudes were not affected by emotion, the early posterior negativity started earlier and lasted longer for large relative to small words. These results suggest that emotion-driven facilitation of attention is not necessarily based on biological relevance, but might generalize to stimuli with arbitrary perceptual features. This finding points to the high relevance of written language in today's society as an important source of emotional meaning.

  8. Word Order in Russian Sign Language

    Science.gov (United States)

    Kimmelman, Vadim

    2012-01-01

    In this paper the results of an investigation of word order in Russian Sign Language (RSL) are presented. A small corpus of narratives based on comic strips by nine native signers was analyzed and a picture-description experiment (based on Volterra et al. 1984) was conducted with six native signers. The results are the following: the most frequent…

  9. Phonological, visual, and semantic coding strategies and children's short-term picture memory span

    OpenAIRE

    Henry, L.; Messer, D. J.; Luger-Klein, S.; Crane, L.

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust an...

  10. The effect of recall, reproduction, and restudy on word learning: a pre-registered study.

    Science.gov (United States)

    Krishnan, Saloni; Watkins, Kate E; Bishop, Dorothy V M

    2017-08-04

    Certain manipulations, such as testing oneself on newly learned word associations (recall), or the act of repeating a word during training (reproduction), can lead to better learning and retention relative to simply providing more exposure to the word (restudy). Such benefit has been observed for written words. Here, we test how these training manipulations affect learning of words presented aurally, when participants are required to produce these novel phonological forms in a recall task. Participants (36 English-speaking adults) learned 27 pseudowords, which were paired with 27 unfamiliar pictures. They were given cued recall practice for 9 of the words, reproduction practice for another set of 9 words, and the remaining 9 words were restudied. Participants were tested on their recognition (3-alternative forced choice) and recall (saying the pseudoword in response to a picture) of these items immediately after training, and a week after training. Our hypotheses were that reproduction and restudy practice would lead to better learning immediately after training, but that cued recall practice would lead to better retention in the long term. In all three conditions, recognition performance was extremely high immediately after training, and a week following training, indicating that participants had acquired associations between the novel pictures and novel words. In addition, recognition and cued recall performance was better immediately after training relative to a week later, confirming that participants forgot some words over time. However, results in the cued recall task did not support our hypotheses. Immediately after training, participants showed an advantage for cued Recall over the Restudy condition, but not over the Reproduce condition. Furthermore, there was no boost for the cued Recall condition over time relative to the other two conditions. Results from a Bayesian analysis also supported this null finding. Nonetheless, we found a clear effect of word

  11. The low-frequency encoding disadvantage: Word frequency affects processing demands.

    Science.gov (United States)

    Diana, Rachel A; Reder, Lynne M

    2006-07-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying items (pictures and words of varying frequencies) along with low-frequency words reduces performance for those stimuli. Copyright 2006 APA, all rights reserved.

  12. Lexical and semantic representations in the acquisition of L2 cognate and non-cognate words: evidence from two learning methods in children.

    Science.gov (United States)

    Comesaña, Montserrat; Soares, Ana Paula; Sánchez-Casas, Rosa; Lima, Cátia

    2012-08-01

    How bilinguals represent words in two languages and which mechanisms are responsible for second language acquisition are important questions in the bilingual and vocabulary acquisition literature. This study aims to analyse the effect of two learning methods (picture- vs. word-based method) and two types of words (cognates and non-cognates) in early stages of children's L2 acquisition. Forty-eight native speakers of European Portuguese, all sixth graders (mean age = 10.87 years; SD = 0.85), participated in the study. None of them had prior knowledge of Basque (the L2 in this study). After a learning phase in which L2 words were learned either by a picture- or a word-based method, children were tested in a backward-word translation recognition task at two times (immediately vs. one week later). Results showed that the participants made more errors when rejecting semantically related than semantically unrelated words as correct translations (semantic interference effect). The magnitude of this effect was higher in the delayed test condition regardless of the learning method. Moreover, the overall performance of participants from the word-based method was better than the performance of participants from the picture-based method. Results are discussed in relation to the most significant models of bilingual lexical processing. ©2011 The British Psychological Society.

  13. Food-related attentional bias. Word versus pictorial stimuli and the importance of stimuli calorific value in the dot probe task.

    Science.gov (United States)

    Freijy, Tanya; Mullan, Barbara; Sharpe, Louise

    2014-12-01

    The primary aim of this study was to extend previous research on food-related attentional biases by examining biases towards pictorial versus word stimuli, and foods of high versus low calorific value. It was expected that participants would demonstrate greater biases to pictures over words, and to high-calorie over low-calorie foods. A secondary aim was to examine associations between BMI, dietary restraint, external eating and attentional biases. It was expected that high scores on these individual difference variables would be associated with a bias towards high-calorie stimuli. Undergraduates (N = 99) completed a dot probe task including matched word and pictorial food stimuli in a controlled setting. Questionnaires assessing eating behaviour were administered, and height and weight were measured. Contrary to predictions, there were no main effects for stimulus type (pictures vs. words) or calorific value (high vs. low). There was, however, a significant interaction effect suggesting a bias towards high-calorie pictures, but away from high-calorie words; and a bias towards low-calorie words, but away from low-calorie pictures. No associations between attentional bias and any of the individual difference variables were found. The presence of a stimulus type by calorific value interaction demonstrates the importance of stimulus type in the dot probe task, and may help to explain inconsistencies in prior research. Further research is needed to clarify associations between attentional bias and BMI, restraint, and external eating. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Digitalization and the Production of Feeling and Emotion: The Case of Words Cut into the Skin

    Directory of Open Access Journals (Sweden)

    Sternudd Hans T.

    2015-08-01

    Full Text Available This article investigates one example of how affect is articulated in the self-cutting of words into the skin and how the meaning of this multimodal statement is modified through remediation. According to Tomkins, affects are understood as intensities that are impossible to frame as feelings or emotions. A theoretical framework based on Laclau’s and Mouffe’s discourse theory and the multimodal categories developed by Kress and van Leeuwen is used. Photographs of self-cutting and statements from people who cut themselves are examined through content analyses. The results show that words that had been cut into the skin often referred to painful experiences, disgust directed against themselves, or social isolation. Further, the study shows that when the cut-in words are remediated through a photograph, digitalized and published online, other meanings appear. Inside internet communities for people who self-injure, the photographs were associated with a communal experience, identification and prescribed activity. The original self-oriented feelings about one’s shortcomings and isolation attached to self-cutting could be altered so that they connoted, instead, experiences of solidarity, identity and intimacy.

  15. Tone of voice guides word learning in informative referential contexts.

    Science.gov (United States)

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C

    2013-06-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., "daxen") spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

  16. Single-Word Predictions of Upcoming Language During Comprehension: Evidence from the Cumulative Semantic Interference Task

    Science.gov (United States)

    Kleinman, Daniel; Runnqvist, Elin; Ferreira, Victor S.

    2015-01-01

    Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one…”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence. PMID:25917550

  17. Design and Applications of a Multimodality Image Data Warehouse Framework

    Science.gov (United States)

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885
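
    As a rough illustration of the layered separation described above (GUI presentation, warehouse business services, staging area, backend sources), the following Python sketch uses hypothetical class and method names; it is not the cited system's CORBA/Java interface, only the general pattern of keeping the client layer decoupled from heterogeneous source systems.

```python
# Minimal sketch of the layered separation described above (hypothetical names,
# not the actual CORBA/Java interfaces of the cited system).

class BackendSource:
    """A backend system, e.g. a PACS image archive or a clinical database."""
    def __init__(self, name, records):
        self.name = name
        self.records = records  # e.g. {patient_id: [record dict, ...]}

    def query(self, patient_id):
        return self.records.get(patient_id, [])


class StagingArea:
    """Collects and normalises records pulled from heterogeneous sources."""
    def __init__(self, sources):
        self.sources = sources

    def extract(self, patient_id):
        staged = []
        for src in self.sources:
            for rec in src.query(patient_id):
                staged.append({"source": src.name, **rec})
        return staged


class WarehouseService:
    """Business-services layer: the only layer the presentation GUI talks to."""
    def __init__(self, staging):
        self.staging = staging

    def multimodality_workup(self, patient_id):
        records = self.staging.extract(patient_id)
        # Group by modality so MRI, PET, EEG etc. can be reviewed side by side.
        by_modality = {}
        for rec in records:
            by_modality.setdefault(rec["modality"], []).append(rec)
        return by_modality


if __name__ == "__main__":
    mri = BackendSource("PACS-MRI", {"p01": [{"modality": "MRI", "study": "T1 volumetric"}]})
    pet = BackendSource("PET-archive", {"p01": [{"modality": "PET", "study": "FDG interictal"}]})
    service = WarehouseService(StagingArea([mri, pet]))
    print(service.multimodality_workup("p01"))
```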

  18. Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

    Directory of Open Access Journals (Sweden)

    Akira Taniguchi

    2017-12-01

    Full Text Available In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method.
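
    The co-occurrence principle behind cross-situational learning can be illustrated with a deliberately reduced sketch: a count-based association table rather than the authors' Bayesian generative model, with made-up words and percepts.

```python
# Reduced sketch of cross-situational word learning: accumulate word/percept
# co-occurrence counts across situations and read off the strongest association.
# This is a simplification of the cited Bayesian generative model.
from collections import defaultdict

def cross_situational_learn(situations):
    counts = defaultdict(lambda: defaultdict(int))
    for words, percepts in situations:          # one tutoring situation
        for w in words:
            for channel, value in percepts.items():
                counts[w][(channel, value)] += 1
    # For each word, pick the (channel, value) it co-occurred with most often.
    return {w: max(obs, key=obs.get) for w, obs in counts.items()}

situations = [
    (["grasp", "ball", "red"],  {"action": "grasp", "object": "ball", "color": "red"}),
    (["push",  "ball", "blue"], {"action": "push",  "object": "ball", "color": "blue"}),
    (["grasp", "cup",  "blue"], {"action": "grasp", "object": "cup",  "color": "blue"}),
]
print(cross_situational_learn(situations))
# Words seen across several situations disambiguate (e.g. "ball" -> ("object", "ball"),
# "grasp" -> ("action", "grasp")); words seen only once remain ambiguous.
```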

  19. Do Natural Pictures Mean Natural Tastes?

    DEFF Research Database (Denmark)

    Smith, Viktor; Barratt, Daniel; Sørensen, Henrik Selsøe

    2015-01-01

    A widespread assumption in Danish consumer law is that if the package of a food product carries a picture of a potentially taste-giving ingredient (say, a strawberry), then consumers will expect the corresponding taste to stem primarily from that ingredient rather than from artificial flavouring… However, this is not expected to be the case if the packaging carries only a verbal indication of the potential ingredient (say, the word strawberry). We put these assumptions to experimental test. Our goal was to contribute firmer evidence to the legal decision-making in the present field while…

  20. Multimodality image analysis work station

    International Nuclear Information System (INIS)

    Ratib, O.; Huang, H.K.

    1989-01-01

    The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand-alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans.

  1. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  2. Phonological, visual, and semantic coding strategies and children's short-term picture memory span.

    Science.gov (United States)

    Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.

  3. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Directory of Open Access Journals (Sweden)

    Jing Qu

    2017-08-01

    Full Text Available Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory.

  4. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Science.gov (United States)

    Qu, Jing; Qian, Liu; Chen, Chuansheng; Xue, Gui; Li, Huiling; Xie, Peng; Mei, Leilei

    2017-01-01

    Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory. PMID:28878640
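
    The core analysis (across-repetition pattern similarity per item, correlated with later naming latency) can be sketched as follows; the array shapes, the ROI, and the simulated data are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch of the similarity-to-behaviour analysis: correlate each
# item's mean across-repetition pattern similarity with its post-training RT.
# Shapes and preprocessing are assumed, not taken from the cited study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_items, n_reps, n_voxels = 60, 4, 200
patterns = rng.normal(size=(n_items, n_reps, n_voxels))   # ROI patterns per repetition
naming_rt = rng.normal(loc=800, scale=100, size=n_items)  # post-training RTs (ms)

def mean_pattern_similarity(item_patterns):
    """Mean pairwise Pearson correlation across repetitions of one item."""
    sims = []
    for i in range(len(item_patterns)):
        for j in range(i + 1, len(item_patterns)):
            sims.append(np.corrcoef(item_patterns[i], item_patterns[j])[0, 1])
    return np.mean(sims)

similarity = np.array([mean_pattern_similarity(patterns[i]) for i in range(n_items)])
r, p = pearsonr(similarity, naming_rt)
print(f"pattern similarity vs. RT: r = {r:.2f}, p = {p:.3f}")
# In the cited study this correlation was negative: more similar pre-training
# patterns in left PO/FG went with faster (shorter) naming RTs after training.
```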

  5. Gender Equity in Picture Books in Preschool Classrooms: An Exploratory Study.

    Science.gov (United States)

    Patt, Michelle B.; McBride, Brent A.

    A study examined the frequency with which males and females are represented in picture books available in preschool classrooms. Three areas were examined: pronoun usage and gender of characters; the frequency of gender-neutral pronouns and characters; and written text compared to teachers' wording when reading aloud. The study involved 11 head and…

  6. Multimodality and Ambient Intelligence

    NARCIS (Netherlands)

    Nijholt, Antinus; Verhaegh, W.; Aarts, E.; Korst, J.

    2004-01-01

    In this chapter we discuss multimodal interface technology. We present examples of multimodal interfaces and show problems and opportunities. Fusion of modalities is discussed and some roadmap discussions on research in multimodality are summarized. This chapter also discusses future developments.

  7. Language production in a shared task: Cumulative semantic interference from self- and other-produced context words

    NARCIS (Netherlands)

    Hoedemaker, R.S.; Ernst, J.; Meyer, A.S.; Belke, E.

    2017-01-01

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this

  8. Professional Music Training and Novel Word Learning: From Faster Semantic Encoding to Longer-lasting Word Representations.

    Science.gov (United States)

    Dittinger, Eva; Barbaroux, Mylène; D'Imperio, Mariapaola; Jäncke, Lutz; Elmer, Stefan; Besson, Mireille

    2016-10-01

    On the basis of previous results showing that music training positively influences different aspects of speech perception and cognition, the aim of this series of experiments was to test the hypothesis that adult professional musicians would learn the meaning of novel words through picture-word associations more efficiently than controls without music training (i.e., fewer errors and faster RTs). We also expected musicians to show faster changes in brain electrical activity than controls, in particular regarding the N400 component that develops with word learning. In line with these hypotheses, musicians outperformed controls in the most difficult semantic task. Moreover, although a frontally distributed N400 component developed in both groups of participants after only a few minutes of novel word learning, in musicians this frontal distribution rapidly shifted to parietal scalp sites, as typically found for the N400 elicited by known words. Finally, musicians showed evidence for better long-term memory for novel words 5 months after the main experimental session. Results are discussed in terms of cascading effects from enhanced perception to memory as well as in terms of multifaceted improvements of cognitive processing due to music training. To our knowledge, this is the first report showing that music training influences semantic aspects of language processing in adults. These results open new perspectives for education in showing that early music training can facilitate later foreign language learning. Moreover, the design used in the present experiment can help to specify the stages of word learning that are impaired in children and adults with word learning difficulties.

  9. Meaning between, in and around Words, Gestures and Postures--Multimodal Meaning-Making in Children's Classroom Discourse

    Science.gov (United States)

    Taylor, Roberta

    2014-01-01

    The view of language from a social semiotic perspective is clear. Language is one of many semiotic resources we employ in our communicative practices. That is to say that while language is at times dominant, it always operates within a multimodal frame and furthermore, at times modes other than language are dominant. The 2014 National Curriculum…

  10. Revisiting persuasion in oral academic and professional genres: Towards a methodological framework for Multimodal Discourse Analysis of research dissemination talks

    Directory of Open Access Journals (Sweden)

    Julia Valeiras-Jurado

    2018-04-01

    Full Text Available Previous work on oral genres (Kress & Van Leeuwen, 2001; Kress, 2010; Bateman, 2011) as well as on persuasion (O’Keefe, 2002; Perloff, 2003; Poggi & Pelachaud, 2008) has indicated that effective persuasive oral communication depends heavily on the use of a wide range of different semiotic modes including words, gestures and intonation. However, little attention has been paid so far to how speakers convey their communicative intentions orchestrating different modes into a coherent multimodal ensemble (Kress, 2010). In this paper we propose a methodological framework for Multimodal Discourse Analysis (MDA) of persuasion in oral academic and professional genres. Drawing on previous studies on persuasion (Fuertes-Olivera et al., 2001; O’Keefe, 2002; Perloff, 2003; Virtanen & Halmari, 2005; Dafouz-Milne, 2008), our framework combines earlier proposals for MDA (Querol-Julián, 2011; Querol-Julián & Fortanet-Gómez, 2014) with an ethnographic perspective (Rubin & Rubin, 1995). Our study focuses specifically on the analysis of persuasive strategies used in dissemination talks. The proposed MDA caters for the following modes: words, intonation, head movements and gestures. Preliminary findings hint at a relation between persuasion and so-called modal density (Norris, 2004). Finally, we propose a tentative taxonomy of persuasive strategies and how they are realised multimodally.

  11. Setting the Alarm: Word Emotional Attributes Require Consolidation to be Operational.

    Science.gov (United States)

    Dumay, Nicolas; Sharma, Dinkar; Kellen, Nora; Abdelrahim, Sarah

    2018-01-25

    Demonstrations of emotional Stroop effects in conditioned made-up words are flawed because of the lack of a task ensuring similar word encoding across conditions. Here, participants were trained on associations between made-up words (e.g., 'drott') and pictures with an alarming or neutral content (e.g., 'a dead sheep' vs. 'a munching cow') in a situation that required attention to both ends of each association. To test whether word emotional attributes need to consolidate before they can hijack attention, one set of associations was learned seven days before the test, whereas the other set was learned either six hrs or immediately before the test. The novel words' ability to evoke their emotional attributes was assessed by using both Stroop and an auditory analogue called pause detection. Matching words and pictures was harder for alarming associations. However, similar learning rate and forgetting at seven days were observed for both types of associations. Pause detection revealed no emotion effect for same-day (i.e., unconsolidated) associations, but robust interference for seven-day-old (i.e., consolidated) alarming associations. Attention capture was found in the emotional Stroop as well, though only when trial n-1 referred to a same-day association. This task also showed stronger response repetition priming (independently of emotion) when trials n and n-1 both tapped into seven-day-old associations. Word emotional attributes hence take between six hrs and seven days to be operational. Moreover, age interactions between consecutive trials can be used to gauge implicitly the indirect (relational) episodic associations that develop in the meantime between the memories of individual items. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. Dynamics of Semantic and Word-Formation Subsystems of the Russian Language: Historical Dynamics of the Word Family

    Directory of Open Access Journals (Sweden)

    Olga Ivanovna Dmitrieva

    2015-09-01

    Full Text Available The article provides comprehensive justification of the principles and methods of the synchronic and diachronic research of word-formation subsystems of the Russian language. The authors also study the ways of analyzing the historical dynamics of the word family as the main macro-unit of the word-formation system. In the field of analysis there is a family of words with the stem 'ход-' (the meaning of 'motion'), the word-formation of which is investigated in different periods of the Russian literary language. The significance of motion verbs in the process of forming a language picture of the world determined the character and the structure of this word family as one of the biggest in the history of the Russian language. In the article the structural and semantic dynamics of the word family 'ход-' are depicted. The results of the study show that in the ancient period the prefixes of verbal derivatives were formed, which became the apex-branched derivational paradigms existing in modern Russian. The old Russian period of language development is characterized by the appearance of words with connotative meaning (with suffixes -ishk-, -ichn-), as well as the words with possessive semantics (with suffixes –ev-, -sk-). In this period the verbs with the postfix -cz also supplement the analyzed word family. The period of formation of the National Russian language was marked by the loss of a large number of abstract nouns and the appearance of neologisms from some old Russian abstract nouns. The studied family in the modern Russian language is characterized by the following processes: the appearance of terms, the active semantic derivation, the weakening of word-formation variability, the semantic differentiation of duplicate units, the development of a subsystem of words with connotative meanings, and the preservation of derivatives in all functional styles.

  13. mPano: cloud-based mobile panorama view from single picture

    Science.gov (United States)

    Li, Hongzhi; Zhu, Wenwu

    2013-09-01

    Panorama view provides people with an informative and natural user experience to represent the whole scene. The advances in mobile augmented reality, mobile-cloud computing, and mobile internet can enable panorama view on mobile phones with new functionalities, such as anytime-anywhere queries of where a landmark picture is and what the whole scene looks like. Generating and exploring panorama views on mobile devices faces significant challenges due to the limitations of computing capacity, battery life, and memory size of mobile phones, as well as the bandwidth of mobile Internet connection. To address the challenges, this paper presents a novel cloud-based mobile panorama view system that can generate and view panorama views on mobile devices from a single picture, namely "mPano". In our system, first, we propose a novel iterative multi-modal image retrieval (IMIR) approach to get spatially adjacent images using both tag and content information from the single picture. Second, we propose a cloud-based parallel server synthing approach to generate panorama view in cloud, against today's local-client synthing approach that is almost impossible for mobile phones. Third, we propose a predictive-cache solution to reduce latency of image delivery from the cloud server to the mobile client. We have built a real mobile panorama view system and performed experiments. The experimental results demonstrated the effectiveness of our system and the proposed key component technologies, especially for landmark images.
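
    The iterative multi-modal image retrieval (IMIR) idea, alternately expanding the candidate set by tag similarity and by visual-content similarity until it stabilises, might be sketched as below; the similarity measures and thresholds are placeholders, not the features used in the cited system.

```python
# Sketch of an iterative multi-modal retrieval loop in the spirit of IMIR:
# start from one query picture, pull in images that are close either in tag
# space or in visual-feature space, and repeat until the set stabilises.
# Similarity measures here are simple placeholders (Jaccard on tags, cosine on
# feature vectors), not the features used in the cited system.
import numpy as np

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def imir(query_id, images, tag_thr=0.3, vis_thr=0.8, max_iters=5):
    selected = {query_id}
    for _ in range(max_iters):
        added = set()
        for sid in list(selected):
            for iid, img in images.items():
                if iid in selected:
                    continue
                tag_sim = jaccard(images[sid]["tags"], img["tags"])
                vis_sim = cosine(images[sid]["feat"], img["feat"])
                if tag_sim >= tag_thr or vis_sim >= vis_thr:
                    added.add(iid)
        if not added:           # converged: no new spatially adjacent candidates
            break
        selected |= added
    return selected

images = {
    "q":  {"tags": ["eiffel", "paris"], "feat": np.array([1.0, 0.1, 0.0])},
    "a1": {"tags": ["eiffel", "night"], "feat": np.array([0.9, 0.2, 0.1])},
    "a2": {"tags": ["paris", "seine"],  "feat": np.array([0.2, 0.9, 0.3])},
    "b1": {"tags": ["beach"],           "feat": np.array([0.0, 0.1, 1.0])},
}
print(imir("q", images))   # {'q', 'a1', 'a2'} under these toy thresholds
```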

  14. An Initial Investigation of the Neural Correlates of Word Processing in Preschoolers With Specific Language Impairment.

    Science.gov (United States)

    Haebig, Eileen; Leonard, Laurence; Usler, Evan; Deevy, Patricia; Weber, Christine

    2018-03-15

    Previous behavioral studies have found deficits in lexical-semantic abilities in children with specific language impairment (SLI), including reduced depth and breadth of word knowledge. This study explored the neural correlates of early emerging familiar word processing in preschoolers with SLI and typical development. Fifteen preschoolers with typical development and 15 preschoolers with SLI were presented with pictures followed after a brief delay by an auditory label that did or did not match. Event-related brain potentials were time locked to the onset of the auditory labels. Children provided verbal judgments of whether the label matched the picture. There were no group differences in the accuracy of identifying when pictures and labels matched or mismatched. Event-related brain potential data revealed that mismatch trials elicited a robust N400 in both groups, with no group differences in mean amplitude or peak latency. However, the typically developing group demonstrated a more robust late positive component, elicited by mismatch trials. These initial findings indicate that lexical-semantic access of early acquired words, indexed by the N400, does not differ between preschoolers with SLI and typical development when highly familiar words are presented in isolation. However, the typically developing group demonstrated a more mature profile of postlexical reanalysis and integration, indexed by an emerging late positive component. The findings lay the necessary groundwork for better understanding processing of newly learned words in children with SLI.

  15. Reduced negativity effect in older adults' memory for emotional pictures: the heterogeneity-homogeneity list paradigm.

    Science.gov (United States)

    Grühn, Daniel; Scheibe, Susanne; Baltes, Paul B

    2007-09-01

    Using the heterogeneity-homogeneity list paradigm, the authors investigated 48 young adults' (20-30 years) and 48 older adults' (65-75 years) recognition memory for emotional pictures. The authors obtained no evidence for a positivity bias in older adults' memory: Age differences were primarily driven by older adults' diminished ability to remember negative pictures. The authors further found a strong effect of list types: Pictures, particularly neutral ones, were better recognized in homogeneous (blocked) lists than in heterogeneous (mixed) ones. Results confirm those of a previous study by D. Grühn, J. Smith, and P. B. Baltes (2005) that used a different type of to-be-remembered material, that is, pictures instead of words. (PsycINFO Database Record (c) 2007 APA, all rights reserved).

  16. Binary Opposition Sociolinguistic Picture of the World (on the Material of Modern English Language

    Directory of Open Access Journals (Sweden)

    Natalia B. Boyeva-Omelechko

    2017-03-01

    Full Text Available The article is topical due to scientists' interest in the interaction of society and language and in the role of language in constructing and reconstructing political reality. The authors of the article introduce the notion of the sociolinguistic picture of the world. This picture reflects different aspects of the phenomenon «society» with the help of words which may be called «sociolinguisms». The words in question cover the economic, social, political and spiritual spheres of society. The authors put forward the idea that binary oppositions are typical of the sociolinguistic picture of the world because social phenomena usually contain interdependent and at the same time mutually exclusive sides, which are registered by consciousness and reflected in the language. Unfortunately, only some of these oppositions are represented in dictionaries of synonyms and antonyms. The authors stress that their range is much wider and can be described on the basis of different types of general and explanatory dictionaries with the help of linguistic methods of investigation. Special attention is paid to national peculiarities and axiological properties of antonyms-sociolinguisms.

  17. Modality. Commitment, Truth Value and Reality Claims Across Modes in Multimodal Novels

    DEFF Research Database (Denmark)

    Nørgaard, Nina

    2010-01-01

    to the description and analysis of literary texts which – in addition to wording – make use of other semiotic modes such as typography, visual images, colour and layout for their meaning-making. The approach to multimodality deployed and examined is that proposed, for instance, by Kress and van Leeuwen (e.g. 1996) … to the analysis of two explicitly multimodal novels, with particular focus on the realisation of modality in visual images and typography. The texts put up for analysis are Alexander Masters’ Stuart. A Life Backwards and Jonathan Safran Foer’s Extremely Loud and Incredibly Close. While Kress and van Leeuwen … of Masters’ and Foer’s sporadic use of special typography, in turn, reveals that although some of Kress and van Leeuwen’s modality parameters may be applicable to typography, the descriptive system is clearly less adequate in a typographic context where further work is needed before workable tools can…

  18. Ethernet image communication performance in a multimodal PACS network

    International Nuclear Information System (INIS)

    Lou, S.L.; Valentino, D.J.; Chan, K.K.; Huang, H.K.

    1989-01-01

    The authors have evaluated the performance of an Ethernet network in a multimodal picture archiving and communications system (PACS) environment. The study included measurements between Sun workstations and PC-AT computers running communication software at the TCP level. First, they initiated image transfers between two workstations, a server and a client. Next, they successively added clients to transfer images to the server and they measured degradation in network performance. Finally, they initiated image transfers between pairs of workstations and again measured performance degradation. The results of the authors' experiments indicate that Ethernet is suitable for image communication only in limited network situations. They discuss how to maximize network performance given these constraints.
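
    A present-day analogue of this kind of measurement is to time a bulk TCP transfer between two hosts; the sketch below (standard-library Python, hypothetical host and port) reports effective throughput for a fixed payload, loosely mirroring the image transfers timed at the TCP level in the study.

```python
# Minimal TCP throughput probe (client side): send a fixed-size payload to a
# sink server and report effective MB/s. Host, port and payload size are
# illustrative; the cited study measured Sun/PC-AT transfers over Ethernet.
import socket
import time

HOST, PORT = "192.168.1.10", 5001      # hypothetical sink server
PAYLOAD_MB = 8
CHUNK = 64 * 1024

def measure_throughput(host=HOST, port=PORT, payload_mb=PAYLOAD_MB):
    data = b"\x00" * CHUNK
    total = payload_mb * 1024 * 1024
    sent = 0
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        while sent < total:
            sock.sendall(data)      # blocking bulk transfer at the TCP level
            sent += len(data)
    elapsed = time.perf_counter() - start
    return (sent / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"effective throughput: {measure_throughput():.2f} MB/s")
```

    Running several such clients against the same server, as in the study, would show how per-client throughput degrades as the shared segment saturates.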

  19. SNOW IN THE RUSSIAN LANGUAGE PICTURE OF THE WORLD

    Directory of Open Access Journals (Sweden)

    Jelena Kazimianec

    2013-10-01

    Full Text Available This article carries out a semantic and pragmatic description of the Russian word снег “snow,” considering its synonymic and word-formation relations, establishing a family of words, and defining the semantic oppositions in which the word “snow” and its separate word usages appear. The author pays particular attention to the pragmatic connotations of this word, placing them against the background of the different connotations of the corresponding words in foreign languages. The article further investigates the group of the words designating the weather phenomena that typically accompany snowfall: метель “a snowstorm,” вьюга “a snowstorm, a blizzard,” буран “a severe snowstorm,” and пурга “a snowstorm, a blizzard,” defining their semantic range and features of how they function in speech. On the basis of an analysis of the facts provided in dictionaries and poetic discourses, the author comes to a conclusion about the existence of a separate semantic group of words with this meaning that proves the special importance of this weather phenomenon for Russians. The analysis also provides a way to determine that, unlike in other languages, the concept of “snow” in the Russian picture of the world is considered as an active figure: the word combination снег идет “it is snowing” is associated with positive concepts about happiness, the novelty of life, satisfaction with Russian aesthetic concepts about beauty, etc. The author proves that words and concepts united by the component “snow” possess a certain romantic nuance in which, it may be claimed, the unique character of Russian culture consists.

  20. Semantic category interference in overt picture naming: Sharpening current density localization by PCA

    NARCIS (Netherlands)

    Maess, B.; Friederici, A.D.; Damian, M.F.; Meyer, A.S.; Levelt, W.J.M.

    2002-01-01

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures

  1. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants

    OpenAIRE

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    BACKGROUND: Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. METHODS: A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, ...

  2. The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy.

    Science.gov (United States)

    Tanaka, Yukari; Kanakogi, Yasuhiro; Kawasaki, Masahiro; Myowa, Masako

    2018-04-01

    Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all these signals. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio-tactile (A-T) information. By using electroencephalogram (EEG) and event-related potentials (ERPs), the present study investigated how neural processing involved in A-T integration is modulated by tactile interaction. Seven- to 8-month-old infants heard one pseudoword both whilst being tickled (multimodal 'A-T' condition), and not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. The integration of audio−tactile information is modulated by multimodal social interaction with physical contact in infancy

    Directory of Open Access Journals (Sweden)

    Yukari Tanaka

    2018-04-01

    Full Text Available Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all these signals. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio−tactile (A-T) information. By using electroencephalogram (EEG) and event-related potentials (ERPs), the present study investigated how neural processing involved in A-T integration is modulated by tactile interaction. Seven- to 8-month-old infants heard one pseudoword both whilst being tickled (multimodal ‘A-T’ condition), and not being tickled (unimodal ‘A’ condition). Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants’ brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy. Keywords: Electroencephalogram (EEG), Infants, Multisensory integration, Touch interaction

  4. Learning multimodal dictionaries.

    Science.gov (United States)

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrate at each instant perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can be also extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted at all positions in the signal is proposed, as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips allows to effectively localize the sound source on the video in presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
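
    A heavily simplified version of the idea, learning each joint audio-video atom as the dominant direction of stacked multimodal training patches and then deflating, can be sketched with plain eigendecomposition; the cited method additionally handles temporal shifts and solves a generalized eigenvector problem, so this is only the core principle on simulated data.

```python
# Simplified sketch of learning a small multimodal dictionary: each atom is the
# leading principal direction of the stacked (audio + video) training patches,
# learned greedily by deflation. The cited method is shift-invariant and uses a
# generalized eigenvector formulation; this sketch keeps only the core idea.
import numpy as np

def learn_multimodal_atoms(audio_patches, video_patches, n_atoms=3):
    # Stack modalities so each training example is one joint row vector.
    X = np.hstack([audio_patches, video_patches]).astype(float)   # (n_examples, d_a + d_v)
    X -= X.mean(axis=0)
    atoms = []
    for _ in range(n_atoms):
        # Leading eigenvector of the covariance = direction of maximal joint energy.
        cov = X.T @ X
        _, vecs = np.linalg.eigh(cov)
        atom = vecs[:, -1]
        atoms.append(atom)
        # Deflate: remove the learned component before extracting the next atom.
        X -= np.outer(X @ atom, atom)
    d_a = audio_patches.shape[1]
    return [(a[:d_a], a[d_a:]) for a in atoms]    # split back into (audio, video) parts

rng = np.random.default_rng(1)
audio = rng.normal(size=(500, 32))                                # toy audio patches
video = 0.8 * audio[:, :16] + 0.2 * rng.normal(size=(500, 16))    # correlated video patches
for i, (a_part, v_part) in enumerate(learn_multimodal_atoms(audio, video)):
    print(f"atom {i}: |audio part| = {np.linalg.norm(a_part):.2f}, |video part| = {np.linalg.norm(v_part):.2f}")
```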

  5. Bruner's Three Forms of Representation Revisited: Action, Pictures and Words for Effective Computer Instruction.

    Science.gov (United States)

    Presno, Caroline

    1997-01-01

    Discusses computer instruction in light of Bruner's theory of three forms of representation (action, icons, and symbols). Examines how studies regarding Paivio's dual-coding theory and studies focusing on procedural knowledge support Bruner's theory. Provides specific examples for instruction in three categories: demonstrations, pictures and…

  6. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    Science.gov (United States)

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  7. An Investigation on the Cognitive Effects of Emoji Usage in Text

    DEFF Research Database (Denmark)

    Ousterhout, Thomas Kenneth

    Face-to-face communication is multimodal involving at least the auditory (speech) and the visual (gestures such as head movements, facial expressions and hand gestures) modalities. While multimodal signals are produced naturally in face-to-face communication, they are not so easily provided in written computer-mediated communication, and especially in instant messaging. The visual nonverbal cues are not available and there is a great potential for miscommunication. The growing use of emojis, pictures or short videos of facial expressions and symbols of various types, are a means to replace non… … of a medical EEG equipment with a cheap commercial EEG device. Simple words and emojis produce semantic priming despite being different channels within the same modality and seem to corroborate with how we behave in face-to-face communication. Merging words with emojis also produced semantic congruity effects…

  8. Slowing in reading and picture naming: the effects of aging and developmental dyslexia.

    Science.gov (United States)

    De Luca, Maria; Marinelli, Chiara Valeria; Spinelli, Donatella; Zoccolotti, Pierluigi

    2017-10-01

    We compared the slowing in vocal reaction times shown by dyslexic (relative to control) children with that shown by older (relative to younger) adults, using an approach focused on the detection of global, non-task-specific components. To address this aim, data were analyzed with reference to the difference engine (DEM) and rate and amount (RAM) models. In Experiment 1, typically developing children, children with dyslexia (both attending sixth grade), younger adults and older adults read words and non-words and named pictures. In Experiment 2, word and picture conditions were presented to dyslexic and control children attending eighth grade. In both experiments, dyslexic children were delayed in reading conditions, while they were unimpaired in naming pictures (a finding which indicates spared access to the phonological lexicon). The reading difficulty was well accounted for by a single multiplicative factor while only the residual effect of length (but not frequency and lexicality) was present after controlling for over-additivity using a linear mixed effects model with random slopes on critical variables. Older adults were slower than younger adults across reading and naming conditions. This deficit was well described by a single multiplicative factor. Thus, while slowing of information processing is limited to orthographic stimuli in dyslexic children, it cuts across verbal tasks in older adults. Overall, speed differences in groups such as dyslexic children and older adults can be effectively described with reference to deficits in domains encompassing a variety of experimental conditions rather than deficits in single specific task/conditions. The DEM and RAM prove effective in teasing out global vs. specific components of performance.

  9. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    2010-01-01

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia … on their teaching and learning situations. The choices they make involve e-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very…

  10. Preliminary adaptation into Portuguese of a standardised picture set for the use in research and neuropsychological assessment

    Directory of Open Access Journals (Sweden)

    POMPÉIA SABINE

    1998-01-01

    Full Text Available Pictorial stimuli and words have been widely used to evaluate mnemonic processes in clinical settings, neuropsychological investigations, as well as in studies on the mechanisms underlying the phenomena of memory. However, there seem to be few studies of standardisation of pictures for research in this field. The present paper aimed at adapting the use of a set of pictures standardised for English speaking subjects for Portuguese speakers. Name agreement of 150 pictures was assessed in 100 high-school students. Ninety pictures were found to present the same name for over 90 subjects. Results yield data that may help create more controlled tests for the study of memory for pictorial stimuli in Brazil.

  11. A Picture Is Worth a Thousand Words: Using Visual Images To Improve Comprehension for Middle School Struggling Readers.

    Science.gov (United States)

    Hibbing, Anne Nielsen; Rankin-Erickson, Joan L.

    2003-01-01

    Discusses teacher and student drawings in the classroom, illustrations in texts, picture books, and movies as external image-based tools that support reading comprehension. Presents a summary of points practitioners will want to consider when using sketches, illustrations, picture books, and movies with reluctant and low-ability middle school…

  12. Reading Multimodal Texts for Learning – a Model for Cultivating Multimodal Literacy

    Directory of Open Access Journals (Sweden)

    Kristina Danielsson

    2016-08-01

    Full Text Available The re-conceptualisation of texts over the last 20 years, as well as the development of a multimodal understanding of communication and representation of knowledge, has profound consequences for the reading and understanding of multimodal texts, not least in educational contexts. However, if teachers and students are given tools to “unwrap” multimodal texts, they can develop a deeper understanding of texts, information structures, and the textual organisation of knowledge. This article presents a model for working with multimodal texts in education with the intention to highlight mutual multimodal text analysis in relation to the subject content. Examples are taken from a Singaporean science textbook as well as a Chilean science textbook, in order to demonstrate that the framework is versatile and applicable across different cultural contexts. The model takes into account the following aspects of texts: the general structure, how different semiotic resources operate, the ways in which different resources are combined (including coherence), the use of figurative language, and explicit/implicit values. Since learning operates on different dimensions – such as social and affective dimensions besides the cognitive ones – our inclusion of figurative language and values as components for textual analysis is a contribution to multimodal text analysis for learning.

  13. What factors predict individual subjects' re-learning of words during anomia treatment?

    Directory of Open Access Journals (Sweden)

    William Hayward

    2014-04-01

    Full Text Available A growing number of studies are addressing methodological approaches to treating anomia in persons with aphasia. What is missing from these studies are validated procedures for determining which words have the greatest potential for recovery. The current study evaluates the usefulness of several word-specific variables and one subject-specific measure in predicting success in re-learning problematic words. Methods: Two participants, YPR and ODH, presented with fluent aphasia and marked anomia. YPR’s Aphasia Quotient on the Western Aphasia Battery was 58.8; ODH’s AQ was 79.5. Stimuli were 96 pictures chosen individually for each participant from among those that they named incorrectly on multiple baselines. Subsequently, participants were presented with each picture and asked to indicate whether they could name it covertly, or “in their head.” Each subject completed a biweekly anomia treatment for these pictures. We performed separate statistical analyses for each subject. Dependent variables included whether each word was learned during treatment (Acquisition) and the number of sessions required to learn each word (#Sessions). We used logistic regression models to evaluate the association of (self-reported) covert naming success with Acquisition, and linear regression models to assess the relationship between (self-reported) covert naming success and #Sessions. Starting with the predictors of covert naming accuracy, number of syllables (#syllables), number of phonemes (#phonemes), and frequency, we used backwards elimination methods to select the final regression models. Results: By the end of 25 treatment sessions, YPR had learned 90.2% (37/41) of the covertly correct words but only 70.4% (38/54) of the covertly incorrect words. In the unadjusted analysis, covert naming was significantly associated with Acquisition, OR=3.89, 95% CI: (1.19, 12.74), p=0.025. The result remained significant after adjustment for #phonemes (the only other predictor
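
    The per-item analysis described here can be sketched with standard statistical tooling; the data below are simulated and the model formula is an assumption about how such an analysis is typically specified (logistic regression of Acquisition on covert-naming accuracy, adjusting for word length), not the study's actual dataset or code.

```python
# Illustrative per-item analysis: logistic regression of whether a word was
# (re)learned (Acquisition) on self-reported covert naming, adjusting for word
# length. The data are simulated; only the modelling pattern mirrors the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_items = 96
covert_correct = rng.integers(0, 2, n_items)            # 1 = named correctly "in the head"
n_phonemes = rng.integers(3, 9, n_items)
# Simulate higher acquisition odds for covertly correct, shorter words.
logit = 0.4 + 1.3 * covert_correct - 0.25 * n_phonemes
acquired = rng.random(n_items) < 1 / (1 + np.exp(-logit))

items = pd.DataFrame({
    "acquired": acquired.astype(int),
    "covert_correct": covert_correct,
    "n_phonemes": n_phonemes,
})
model = smf.logit("acquired ~ covert_correct + n_phonemes", data=items).fit(disp=False)
print(model.summary2().tables[1])                        # coefficient table
print("OR for covert naming:", np.exp(model.params["covert_correct"]).round(2))
```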

  14. Say it like you mean it: mothers' use of prosody to convey word meaning.

    Science.gov (United States)

    Herold, Debora S; Nygaard, Lynne C; Namy, Laura L

    2012-09-01

    Prosody plays a variety of roles in infants' communicative development, aiding in attention modulation, speech segmentation, and syntax acquisition. This study investigates the extent to which parents also spontaneously modulate prosodic aspects of infant directed speech in ways that distinguish semantic aspects of language. Fourteen mothers of two-year-old children read a picture book to their children in which they labeled pictures using dimensional adjectives (e.g., big, small, hot, cold). Recordings of the mothers' input to their children were analyzed acoustically and antonyms within each dimension were compared. Mothers modulated aspects of their prosody including amplitude and duration of target words and sentences to distinguish dimensional adjectives. Mothers appear to recruit prosody in the service of word learning.

  15. Critical Analysis of Multimodal Discourse

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This is an encyclopaedia article which defines the fields of critical discourse analysis and multimodality studies, argues that within critical discourse analysis more attention should be paid to multimodality, and within multimodality to critical analysis, and ends by reviewing a few examples of re...

  16. Neural correlates of visualizations of concrete and abstract words in preschool children: A developmental embodied approach

    Directory of Open Access Journals (Sweden)

    Amedeo eD'angiulli

    2015-06-01

    Full Text Available The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors) (part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-Related Potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e. < 300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e. 300-699 ms) and late (i.e. 700-1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a posterior-to-anterior pathway sequence: occipital, parietal and temporal areas; conversely, matching visualization involved left-hemispheric activity following an anterior-to-posterior pathway sequence: frontal, temporal, parietal and occipital areas. These results suggest that, similarly for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying

  17. The role of pictures in improving health communication: a review of research on attention, comprehension, recall, and adherence.

    Science.gov (United States)

    Houts, Peter S; Doak, Cecilia C; Doak, Leonard G; Loscalzo, Matthew J

    2006-05-01

    To assess the effects of pictures on health communications. Peer reviewed studies in health education, psychology, education, and marketing journals were reviewed. There was no limit placed on the time periods searched. Pictures closely linked to written or spoken text can, when compared to text alone, markedly increase attention to and recall of health education information. Pictures can also improve comprehension when they show relationships among ideas or when they show spatial relationships. Pictures can change adherence to health instructions, but emotional response to pictures affects whether they increase or decrease target behaviors. All patients can benefit, but patients with low literacy skills are especially likely to benefit. Patients with very low literacy skills can be helped by spoken directions plus pictures to take home as reminders or by pictures plus very simply worded captions. Educators should: (1) ask "how can I use pictures to support key points?", (2) minimize distracting details in pictures, (3) use simple language in conjunction with pictures, (4) closely link pictures to text and/or captions, (5) include people from the intended audience in designing pictures, (6) have health professionals plan the pictures, not artists, and (7) evaluate pictures' effects by comparing response to materials with and without pictures.

  18. Multimodal label-free microscopy

    Directory of Open Access Journals (Sweden)

    Nicolas Pavillon

    2014-09-01

    Full Text Available This paper reviews different multimodal applications based on a wide range of label-free imaging modalities, from linear to nonlinear optics, while also including spectroscopic measurements. We put specific emphasis on multimodal measurements going across the usual boundaries between imaging modalities, whereas most multimodal platforms combine techniques based on similar light interactions or similar hardware implementations. In this review, we limit the scope to applications in biology, such as live cells or tissues: because such samples are alive or fragile, we are often not free to take liberties with the image acquisition times and are forced to gather the maximum amount of information possible at one time. For such samples, imaging by a given label-free method usually presents a challenge in obtaining sufficient optical signal or is limited in terms of the types of observable targets. Multimodal imaging is then particularly attractive for these samples in order to maximize the amount of measured information. While multimodal imaging is always useful in the sense of acquiring additional information from additional modes, at times it is possible to attain information that could not be discovered using any single mode alone, which is the essence of the progress that is possible using a multimodal approach.

  19. Deafness and Immediate Memory for Pictures: Dissociations between "Inner Speech" and the "Inner Ear"?

    Science.gov (United States)

    Campbell, Ruth; Wright, Helen

    1990-01-01

    Examined deaf children for immediate memory of pictures of objects in two experiments. Deaf children did not use rhyme as a recall cue, but deaf children and age-matched children who could hear were both sensitive to name word length in recall. Implications of findings are discussed. (BC)

  20. Assessing language skills in adult key word signers with intellectual disabilities: Insights from sign linguistics.

    Science.gov (United States)

    Grove, Nicola; Woll, Bencie

    2017-03-01

    Manual signing is one of the most widely used approaches to support the communication and language skills of children and adults who have intellectual or developmental disabilities, and problems with communication in spoken language. A recent series of papers reporting findings from this population raises critical issues for professionals in the assessment of multimodal language skills of key word signers. Approaches to assessment will differ depending on whether key word signing (KWS) is viewed as discrete from, or related to, natural sign languages. Two available assessments from these different perspectives are compared. Procedures appropriate to the assessment of sign language production are recommended as a valuable addition to the clinician's toolkit. Sign and speech need to be viewed as multimodal, complementary communicative endeavours, rather than as polarities. Whilst narrative has been shown to be a fruitful context for eliciting language samples, assessments for adult users should be designed to suit the strengths, needs and values of adult signers with intellectual disabilities, using materials that are compatible with their life course stage rather than those designed for young children. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Neural Correlates of Task-Irrelevant First and Second Language Emotion Words — Evidence from the Face-Word Stroop Task

    Directory of Open Access Journals (Sweden)

    Lin Fan

    2016-11-01

    Full Text Available Emotionally valenced words have thus far not been empirically examined in a bilingual population with the emotional face-word Stroop paradigm. Chinese-English bilinguals were asked to identify the facial expressions of emotion with their first (L1) or second (L2) language task-irrelevant emotion words superimposed on the face pictures. We attempted to examine how the emotional content of words modulates behavioral performance and cerebral functioning in the bilinguals’ two languages. The results indicated that there were significant congruency effects for both L1 and L2 emotion words, and that identifiable differences in the magnitude of the Stroop effect between the two languages were also observed, suggesting L1 is more capable of activating the emotional response to word stimuli. For the event-related potentials (ERP) data, an N350-550 effect was observed only in the L1 task, with greater negativity for incongruent than congruent trials. The size of the N350-550 effect differed across languages, whereas no identifiable language distinction was observed in the effect of the conflict slow potential (conflict SP). Finally, more pronounced negative amplitude at 230-330 ms was observed in L1 than in L2, but only for incongruent trials. This negativity, likened to an orthographic decoding N250, may reflect the extent of attention to emotion word processing at the word-form level, while the N350-550 reflects a complicated set of processes in conflict processing. Overall, the face-word congruency effect reflected an identifiable language distinction at 230-330 and 350-550 ms, which provides supporting evidence for the theoretical proposals assuming attenuated emotionality of L2 processing.

  2. Reading Personalized Books with Preschool Children Enhances Their Word Acquisition

    Science.gov (United States)

    Kucirkova, Natalia; Messer, David; Sheehy, Kieron

    2014-01-01

    This study examines whether books that contain personalized content are better facilitators of young children's word acquisition than books which are not personalized for a child. In a repeated-measures experimental design, 18 children (mean age 3;10) were read a picture book which contained both personalized and non-personalized sections, with…

  3. A neural signature of phonological access: distinguishing the effects of word frequency from familiarity and length in overt picture naming.

    Science.gov (United States)

    Graves, William W; Grabowski, Thomas J; Mehta, Sonya; Gordon, Jean K

    2007-04-01

    Cognitive models of word production correlate the word frequency effect (i.e., the fact that words which appear with less frequency take longer to produce) with an increased processing cost to activate the whole-word (lexical) phonological representation. We performed functional magnetic resonance imaging (fMRI) while subjects produced overt naming responses to photographs of animals and manipulable objects that had high name agreement but were of varying frequency, with the purpose of identifying neural structures participating specifically in activating whole-word phonological representations, as opposed to activating lexical semantic representations or articulatory-motor routines. Blood oxygen level-dependent responses were analyzed using a parametric approach based on the frequency with which each word produced appears in the language. Parallel analyses were performed for concept familiarity and word length, which provided indices of semantic and articulatory loads. These analyses permitted us to identify regions related to word frequency alone, and therefore, likely to be related specifically to activation of phonological word forms. We hypothesized that the increased processing cost of producing lower-frequency words would correlate with activation of the left posterior inferotemporal (IT) cortex, the left posterior superior temporal gyrus (pSTG), and the left inferior frontal gyrus (IFG). Scan-time response latencies demonstrated the expected word frequency effect. Analysis of the fMRI data revealed that activity in the pSTG was modulated by frequency but not word length or concept familiarity. In contrast, parts of IT and IFG demonstrated conjoint frequency and familiarity effects, and parts of both primary motor regions demonstrated conjoint effects of frequency and word length. The results are consistent with a model of word production in which lexical-semantic and lexical-phonological information are accessed by overlapping neural systems within

  4. Influence of syllable structure on L2 auditory word learning.

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-04-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.

  5. Word, nonword and visual paired associate learning in Dutch dyslexic children

    NARCIS (Netherlands)

    Messbauer, V.C.S.; de Jong, P.F.

    2003-01-01

    Verbal and non-verbal learning were investigated in 21 8-11-year-old dyslexic children and chronological-age controls, and in 21 7-9-year-old reading-age controls. Tasks involved the paired associate learning of words, nonwords, or symbols with pictures. Both learning and retention of associations

  6. Multimodality imaging techniques.

    Science.gov (United States)

    Martí-Bonmatí, Luis; Sopena, Ramón; Bartumeus, Paula; Sopena, Pablo

    2010-01-01

    In multimodality imaging, the need to combine morphofunctional information can be approached either by acquiring images at different times (asynchronous) and fusing them through digital image manipulation techniques, or by simultaneously acquiring images (synchronous) and merging them automatically. The asynchronous post-processing solution presents various constraints, mainly conditioned by the different positioning of the patient in the two scans acquired at different times in separate machines. The best solution to achieve consistency in time and space is obtained by synchronous image acquisition. There are many multimodal technologies in molecular imaging. In this review we will focus on the multimodality imaging techniques more commonly used in the field of diagnostic imaging (SPECT-CT, PET-CT) and on new developments (such as PET-MR). Technological innovations and the development of new tracers and smart probes are the main key points that will condition the future of multimodality imaging and of diagnostic imaging professionals. Although SPECT-CT and PET-CT are standard in most clinical scenarios, MR imaging has some advantages, providing excellent soft-tissue contrast and multidimensional functional, structural and morphological information. The next frontier is to develop efficient detectors and electronics systems capable of detecting two modality signals at the same time. Not only PET-MR but also MR-US or optic-PET will be introduced in clinical scenarios. Even more, MR diffusion-weighted imaging, pharmacokinetic imaging, spectroscopy or functional BOLD imaging will merge with PET tracers to further establish molecular imaging as a relevant medical discipline. Multimodality imaging techniques will play a leading role in relevant clinical applications. The development of new diagnostic imaging research areas, mainly in the field of oncology, cardiology and neuropsychiatry, will impact the way medicine is performed today. Both clinical and experimental multimodality studies, in

  7. Evidence for a Limited-Cascading Account of Written Word Naming

    Science.gov (United States)

    Bonin, Patrick; Roux, Sebastien; Barry, Christopher; Canell, Laura

    2012-01-01

    We address the issue of how information flows within the written word production system by examining written object-naming latencies. We report 4 experiments in which we manipulate variables assumed to have their primary impact at the level of object recognition (e.g., quality of visual presentation of pictured objects), at the level of semantic…

  8. Multimodality Registration without a Dedicated Multimodality Scanner

    Directory of Open Access Journals (Sweden)

    Bradley J. Beattie

    2007-03-01

    Full Text Available Multimodality scanners that allow the acquisition of both functional and structural image sets on a single system have recently become available for animal research use. Although the resultant registered functional/structural image sets can greatly enhance the interpretability of the functional data, the cost of multimodality systems can be prohibitive, and they are often limited to two modalities, which generally do not include magnetic resonance imaging. Using a thin plastic wrap to immobilize and fix a mouse or other small animal atop a removable bed, we are able to calculate registrations between all combinations of four different small animal imaging scanners (positron emission tomography, single-photon emission computed tomography, magnetic resonance, and computed tomography [CT]) at our disposal, effectively equivalent to a quadruple-modality scanner. A comparison of serially acquired CT images, with intervening acquisitions on other scanners, demonstrates the ability of the proposed procedures to maintain the rigidity of an anesthetized mouse during transport between scanners. Movement of the bony structures of the mouse was estimated to be 0.62 mm. Soft tissue movement was predominantly the result of the filling (or emptying) of the urinary bladder and thus largely constrained to this region. Phantom studies estimate the registration errors for all registration types to be less than 0.5 mm. Functional images using tracers targeted to known structures verify the accuracy of the functional to structural registrations. The procedures are easy to perform and produce robust and accurate results that rival those of dedicated multimodality scanners, but with more flexible registration combinations and while avoiding the expense and redundancy of multimodality systems.
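
    The record does not spell out the registration algorithm itself. One common way to compute a rigid transform between two scanner coordinate frames from corresponding points (for example, fiducial markers or bed features visible in both scans) is a least-squares fit via singular value decomposition. The sketch below is a generic illustration under that assumption, not the authors' implementation; the point arrays and error metric are hypothetical stand-ins.

        import numpy as np

        def rigid_registration(src, dst):
            """Least-squares rigid transform (rotation R, translation t) mapping
            src points onto dst points; src and dst are (N, 3) arrays of
            corresponding coordinates in the two scanner frames."""
            src_c = src - src.mean(axis=0)
            dst_c = dst - dst.mean(axis=0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T            # reflection-free rotation
            t = dst.mean(axis=0) - R @ src.mean(axis=0)
            return R, t

        def registration_error(src, dst, R, t):
            """Root-mean-square residual distance (same units as the input, e.g. mm)."""
            residuals = (R @ src.T).T + t - dst
            return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))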

  9. A MULTIMODAL DISCOURSE ANALYSIS OF SELECTED ADVERTISEMENT OF MALARIA DRUGS

    Directory of Open Access Journals (Sweden)

    Ayodeji Olowu

    2015-06-01

    Full Text Available This study identified and analyzed the visual and linguistic components associated with selected advertisements of malaria drugs. This was with a view to describing the essential communication devices the advertisers of such drugs have employed. Data for the study were drawn from both primary and secondary sources. The primary source for the study comprised 4 purposively selected posters, stickers and drug literature advertisements on malaria. Analysis of the data followed the framework of Kress and van Leeuwen’s Multimodal Discourse Analysis. The results showed that such visual resources as colour, pictures, symbols and icons, gaze and posture enhance the semantic quality of the advertisements. On the whole, the study emphasizes the vitality of visual and linguistic elements as important communication devices in advertising.

  10. Auditory phonological priming in children and adults during word repetition

    Science.gov (United States)

    Cleary, Miranda; Schwartz, Richard G.

    2004-05-01

    Short-term auditory phonological priming effects involve changes in the speed with which words are processed by a listener as a function of recent exposure to other similar-sounding words. Activation of phonological/lexical representations appears to persist beyond the immediate offset of a word, influencing subsequent processing. Priming effects are commonly cited as demonstrating concurrent activation of word/phonological candidates during word identification. Phonological priming is controversial, the direction of effects (facilitating versus slowing) varying with the prime-target relationship. In adults, it has repeatedly been demonstrated, however, that hearing a prime word that rhymes with the following target word (ISI=50 ms) decreases the time necessary to initiate repetition of the target, relative to when the prime and target have no phonemic overlap. Activation of phonological representations in children has not typically been studied using this paradigm, auditory-word + picture-naming tasks being used instead. The present study employed an auditory phonological priming paradigm being developed for use with normal-hearing and hearing-impaired children. Initial results from normal-hearing adults replicate previous reports of faster naming times for targets following a rhyming prime word than for targets following a prime having no phonemes in common. Results from normal-hearing children will also be reported. [Work supported by NIH-NIDCD T32DC000039.]

  11. Releasing the constraints on aphasia therapy: the positive impact of gesture and multimodality treatments.

    Science.gov (United States)

    Rose, Miranda L

    2013-05-01

    There is a 40-year history of interest in the use of arm and hand gestures in treatments that target the reduction of aphasic linguistic impairment and compensatory methods of communication (Rose, 2006). Arguments for constraining aphasia treatment to the verbal modality have arisen from proponents of constraint-induced aphasia therapy (Pulvermüller et al., 2001). Confusion exists concerning the role of nonverbal treatments in treating people with aphasia. The central argument of this paper is that given the state of the empirical evidence and the strong theoretical accounts of modality interactions in human communication, gesture-based and multimodality aphasia treatments are at least as legitimate an option as constraint-based aphasia treatment. Theoretical accounts of modality interactions in human communication and the gesture production abilities of individuals with aphasia that are harnessed in treatments are reviewed. The negative effects on word retrieval of restricting gesture production are also reviewed, and an overview of the neurological architecture subserving language processing is provided as rationale for multimodality treatments. The evidence for constrained and unconstrained treatments is critically reviewed. Together, these data suggest that constraint treatments and multimodality treatments are equally efficacious, and there is limited support for constraining client responses to the spoken modality.

  12. Inattentional blindness for ignored words: comparison of explicit and implicit memory tasks.

    Science.gov (United States)

    Butler, Beverly C; Klein, Raymond

    2009-09-01

    Inattentional blindness is described as the failure to perceive a supra-threshold stimulus when attention is directed away from that stimulus. Based on performance on an explicit recognition memory test and concurrent functional imaging data Rees, Russell, Frith, and Driver [Rees, G., Russell, C., Frith, C. D., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science, 286, 2504-2507] reported inattentional blindness for word stimuli that were fixated but ignored. The present study examined both explicit and implicit memory for fixated but ignored words using a selective-attention task in which overlapping picture/word stimuli were presented at fixation. No explicit awareness of the unattended words was apparent on a recognition memory test. Analysis of an implicit memory task, however, indicated that unattended words were perceived at a perceptual level. Thus, the selective-attention task did not result in perfect filtering as suggested by Rees et al. While there was no evidence of conscious perception, subjects were not blind to the implicit perceptual properties of fixated but ignored words.

  13. Control adjustments in speaking: Electrophysiology of the Gratton effect in picture naming.

    Science.gov (United States)

    Shitova, Natalia; Roelofs, Ardi; Schriefers, Herbert; Bastiaansen, Marcel; Schoffelen, Jan-Mathijs

    2017-07-01

    Accumulating evidence suggests that spoken word production requires different amounts of top-down control depending on the prevailing circumstances. For example, during Stroop-like tasks, the interference in response time (RT) is typically larger following congruent trials than following incongruent trials. This effect is called the Gratton effect, and has been taken to reflect top-down control adjustments based on the previous trial type. Such control adjustments have been studied extensively in Stroop and Eriksen flanker tasks (mostly using manual responses), but not in the picture-word interference (PWI) task, which is a workhorse of language production research. In one of the few studies of the Gratton effect in PWI, Van Maanen and Van Rijn (2010) examined the effect in picture naming RTs during dual-task performance. Based on PWI effect differences between dual-task conditions, they argued that the functional locus of the PWI effect differs between post-congruent trials (i.e., locus in perceptual and conceptual encoding) and post-incongruent trials (i.e., locus in word planning). However, the dual-task procedure may have contaminated the results. We therefore performed an electroencephalography (EEG) study on the Gratton effect in a regular PWI task. We observed a PWI effect in the RTs, in the N400 component of the event-related brain potentials, and in the midfrontal theta power, regardless of the previous trial type. Moreover, the RTs, N400, and theta power reflected the Gratton effect. These results provide evidence that the PWI effect arises at the word planning stage following both congruent and incongruent trials, while the amount of top-down control changes depending on the previous trial type. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Multimodal fluorescence imaging spectroscopy

    NARCIS (Netherlands)

    Stopel, Martijn H W; Blum, Christian; Subramaniam, Vinod; Engelborghs, Yves; Visser, Anthonie J.W.G.

    2014-01-01

    Multimodal fluorescence imaging is a versatile method that has a wide application range from biological studies to materials science. Typical observables in multimodal fluorescence imaging are intensity, lifetime, excitation, and emission spectra, which are recorded at chosen locations on the sample.

  15. Error analysis of supersonic air-to-air ejector schlieren pictures

    Directory of Open Access Journals (Sweden)

    Kolář J.

    2013-04-01

    Full Text Available The scope of this article is a general analysis of the errors and uncertainties that can arise when matching CFD results to schlieren pictures. The analysis is based on classical analytical equations. These are first evaluated under the presumption of a constant density gradient along the ray course; in other words, the deflection of the light ray caused by the density gradient is assumed to be negligible compared to the cross size of the constant-gradient area. The aim of this work is to determine whether this presumption is applicable in the case of a supersonic air-to-air ejector. Colour and black-and-white schlieren pictures are taken and compared to CFD results. Simulations covered various eddy viscosities. Computed pressure gradients are transformed into deflection angles and further into ray displacement. The resulting computed light-ray deflection is matched to the experimental results.
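
    Under the constant-gradient presumption, a standard schlieren relation (not quoted in the record itself) links the ray deflection angle to the refractive-index gradient, with the refractive index tied to gas density through the Gladstone-Dale constant. The short sketch below illustrates that relation; the constants, test-section length and screen distance are illustrative assumptions, not values from the paper.

        import numpy as np

        GLADSTONE_DALE_K = 2.26e-4   # m^3/kg, air at visible wavelengths (assumed)
        N0 = 1.000293                # ambient refractive index of air (assumed)

        def deflection_angle(drho_dy, path_length):
            """Deflection angle (rad) for a constant density gradient drho_dy
            (kg/m^4) acting over path_length (m): eps ~ (K / n0) * drho_dy * L."""
            return GLADSTONE_DALE_K / N0 * drho_dy * path_length

        def ray_displacement(drho_dy, path_length, screen_distance):
            """Displacement of the deflected ray (m) on a screen placed
            screen_distance (m) downstream of the test section."""
            return np.tan(deflection_angle(drho_dy, path_length)) * screen_distance

        # Example with assumed numbers: a gradient of 50 kg/m^4 over a 40 mm section
        eps = deflection_angle(50.0, 0.04)
        print(f"{eps * 1e6:.0f} microrad, "
              f"{ray_displacement(50.0, 0.04, 1.0) * 1e3:.2f} mm at 1 m")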

  16. Inorganic Nanoparticles for Multimodal Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Magdalena Swierczewska

    2011-01-01

    Full Text Available Multimodal molecular imaging can offer a synergistic improvement of diagnostic ability over a single imaging modality. Recent development of hybrid imaging systems has profoundly impacted the pool of available multimodal imaging probes. In particular, much interest has been focused on biocompatible, inorganic nanoparticle-based multimodal probes. Inorganic nanoparticles offer exceptional advantages to the field of multimodal imaging owing to their unique characteristics, such as nanometer dimensions, tunable imaging properties, and multifunctionality. Nanoparticles mainly based on iron oxide, quantum dots, gold, and silica have been applied to various imaging modalities to characterize and image specific biologic processes on a molecular level. A combination of nanoparticles and other materials such as biomolecules, polymers, and radiometals continue to increase functionality for in vivo multimodal imaging and therapeutic agents. In this review, we discuss the unique concepts, characteristics, and applications of the various multimodal imaging probes based on inorganic nanoparticles.

  17. Attention demands of spoken word planning: A review

    Directory of Open Access Journals (Sweden)

    Ardi eRoelofs

    2011-11-01

    Full Text Available Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent that the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and nonlinguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated nonlinguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated nonlinguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment.

  18. Compound words prompt arbitrary semantic associations in conceptual memory

    Directory of Open Access Journals (Sweden)

    Bastien eBoutonnet

    2014-03-01

    Full Text Available Linguistic relativity theory has received empirical support in domains such as colour perception and object categorisation. It is unknown however, whether relations between words idiosyncratic to language impact nonverbal representations and conceptualisations. For instance, would one consider the concepts of horse and sea as related were it not for the existence of the compound seahorse? Here, we investigated such arbitrary conceptual relationships using a non-linguistic picture relatedness task in participants undergoing event-related brain potential recordings. Picture pairs arbitrarily related because of a compound and presented in the compound order elicited N400 amplitudes similar to unrelated pairs. Surprisingly, however, pictures presented in the reverse order (as in the sequence horse – sea) reduced N400 amplitudes significantly, demonstrating the existence of a link in memory between these two concepts otherwise unrelated. These results break new ground in the domain of linguistic relativity by revealing predicted semantic associations driven by lexical relations intrinsic to language.

  19. Multimodality in organization studies

    DEFF Research Database (Denmark)

    Van Leeuwen, Theo

    2017-01-01

    This afterword reviews the chapters in this volume and reflects on the synergies between organization and management studies and multimodality studies that emerge from the volume. These include the combination of strong sociological theorizing and detailed multimodal analysis, a focus on material...

  20. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  1. Comparing Multilingual Children with SLI to Their Bilectal Peers: Evidence from Object and Action Picture Naming

    Science.gov (United States)

    Kambanaros, Maria; Grohmann, Kleanthes K.; Michaelides, Michalis; Theodorou, Elena

    2013-01-01

    Against the background of the increasing number of multilingual children with atypical language development around the world, this study reports research results on grammatical word class processing involving children with specific language impairment (SLI). The study investigates lexical retrieval of verbs (through picture-naming actions) and…

  2. Pictures as cues or as support to verbal cues at encoding and execution of prospective memories in individuals with intellectual disability

    OpenAIRE

    Levén, Anna; Lyxell, Björn; Andersson, Jan; Danielsson, Henrik

    2013-01-01

    This study focused on prospective memory in persons with intellectual disability and age-matched controls. Persons with intellectual disability have limited prospective memory function. We investigated prospective memory with words and pictures as cues at encoding and retrieval. Prospective and episodic memory were estimated from Prospective Memory Game performance. Pictures at retrieval were important for prospective memory, in particular in the intellectual disability group. Prospective memor...

  3. Practical multimodal care for cancer cachexia.

    Science.gov (United States)

    Maddocks, Matthew; Hopkinson, Jane; Conibear, John; Reeves, Annie; Shaw, Clare; Fearon, Ken C H

    2016-12-01

    Cancer cachexia is common and reduces function, treatment tolerability and quality of life. Given its multifaceted pathophysiology, a multimodal approach to cachexia management is advocated but can be difficult to realise in practice. We use a case-based approach to highlight practical approaches to the multimodal management of cachexia for patients across the cancer trajectory. Four cases with lung cancer spanning surgical resection, radical chemoradiotherapy, palliative chemotherapy and no anticancer treatment are presented. We propose multimodal care approaches that incorporate nutritional support, exercise, and anti-inflammatory agents, on a background of personalized oncology care and family-centred education. Collectively, the cases reveal that multimodal care is part of everyone's remit, often focuses on supported self-management, and demands buy-in from the patient and their family. Once operationalized, multimodal care approaches can be tested pragmatically, including alongside emerging pharmacological cachexia treatments. We demonstrate that multimodal care for cancer cachexia can be achieved using simple treatments and without a dedicated team of specialists. The sharing of advice between health professionals can help build collective confidence and expertise, moving towards a position in which every team member feels they can contribute towards multimodal care.

  4. Impact of the picture exchange communication system: effects on communication and collateral effects on maladaptive behaviors.

    Science.gov (United States)

    Ganz, Jennifer B; Parker, Richard; Benson, Joanne

    2009-12-01

    Many children with autism require intensive instruction in the use of augmentative or alternative communication systems, such as the Picture Exchange Communication System (PECS). This study investigated the use of PECS with three young boys with autism to determine the impact of PECS training on use of pictures for requesting, use of intelligible words, and maladaptive behaviors. A multiple baseline-probe design with a staggered start was implemented. Results indicated that all of the participants quickly learned to make requests using pictures and that two used intelligible speech following PECS instruction; maladaptive behaviors were variable throughout baseline and intervention phases. Although all of the participants improved in at least one dependent variable, there remain questions regarding who is best suited for PECS and similar interventions.

  5. Words, Words, Words: English, Vocabulary.

    Science.gov (United States)

    Lamb, Barbara

    The Quinmester course on words gives the student the opportunity to increase his proficiency by investigating word origins, word histories, morphology, and phonology. The course includes the following: dictionary skills and familiarity with the "Oxford,""Webster's Third," and "American Heritage" dictionaries; word…

  6. BILINGUAL MULTIMODAL SYSTEM FOR TEXT-TO-AUDIOVISUAL SPEECH AND SIGN LANGUAGE SYNTHESIS

    Directory of Open Access Journals (Sweden)

    A. A. Karpov

    2014-09-01

    Full Text Available We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulated 3D model of a human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulated 3D model of human hands and upper body; and a multimodal user interface integrating all the components for generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), fuses the information, and outputs it in the form of multimedia information. A user can input any grammatically correct text in Russian or Czech to the system; it is analyzed by the text processor to detect sentences, words and characters. This textual information is then converted into symbols of the sign language notation. We apply the international «Hamburg Notation System» (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On this basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is a universal one since it is oriented towards both regular users and disabled people (in particular, the hard-of-hearing and visually impaired), and it serves for multimedia output (by audio and visual modalities) of input textual information.
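
    As a rough illustration of the pipeline just described (text analysis, conversion of words into sign-notation features, and parallel audio and visual output), the sketch below uses hypothetical data structures and a toy one-entry lexicon; it does not reproduce the actual system's components, its HamNoSys encoding or its APIs.

        from dataclasses import dataclass

        @dataclass
        class SignEntry:
            # The four differential features a HamNoSys-style notation captures
            hand_shape: str
            orientation: str
            location: str
            movement: str

        SIGN_LEXICON = {  # toy, purely illustrative entry
            "hello": SignEntry("flat", "palm-out", "forehead", "arc-away"),
        }

        def text_processor(text: str) -> list[str]:
            """Stand-in for the text analyzer: split input into lowercase word tokens."""
            return [w.strip(".,!?").lower() for w in text.split()]

        def to_sign_sequence(words: list[str]) -> list[SignEntry]:
            """Map words to sign-notation entries, skipping out-of-lexicon items."""
            return [SIGN_LEXICON[w] for w in words if w in SIGN_LEXICON]

        def synthesize(text: str) -> dict:
            """Produce the two output streams: an audio placeholder for the TTS
            component and a sign sequence for the avatar renderer."""
            words = text_processor(text)
            return {"audio": f"<tts:{' '.join(words)}>", "signs": to_sign_sequence(words)}

        print(synthesize("Hello!"))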

  7. Is Retrieval-Induced Forgetting behind the Bilingual Disadvantage in Word Production?

    Science.gov (United States)

    Runnqvist, Elin; Costa, Albert

    2012-01-01

    Levy, McVeigh, Marful and Anderson (2007) found that naming pictures in L2 impaired subsequent recall of the L1 translation words. This was interpreted as evidence for a domain-general inhibitory mechanism (RIF) underlying first language attrition. Because this result is at odds with some previous findings and theoretical assumptions, we wanted…

  8. Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment

  9. The Influence of Prosodic Stress Patterns and Semantic Depth on Novel Word Learning in Typically Developing Children.

    Science.gov (United States)

    Gladfelter, Allison; Goffman, Lisa

    2013-01-01

    The goal of this study was to investigate the effects of prosodic stress patterns and semantic depth on word learning. Twelve preschool-aged children with typically developing speech and language skills participated in a word learning task. Novel words with either a trochaic or iambic prosodic pattern were embedded in one of two learning conditions, either in children's stories (semantically rich) or picture matching games (semantically sparse). Three main analyses were used to measure word learning: comprehension and production probes, phonetic accuracy, and speech motor stability. Results revealed that prosodic frequency and density influence the learnability of novel words, or that there are prosodic neighborhood density effects. The impact of semantic depth on word learning was minimal and likely depends on the amount of experience with the novel words.

  10. On national flags and language tags: Effects of flag-language congruency in bilingual word recognition.

    Science.gov (United States)

    Grainger, Jonathan; Declerck, Mathieu; Marzouki, Yousri

    2017-07-01

    French-English bilinguals performed a generalized lexical decision experiment with mixed lists of French and English words and pseudo-words. In Experiment 1, each word/pseudo-word was superimposed on the picture of the French or UK flag, and flag-word congruency was manipulated. The flag was not informative with respect to either the lexical decision response or the language of the word. Nevertheless, lexical decisions to word stimuli were faster following the congruent flag compared with the incongruent flag, but only for French (L1) words. Experiment 2 replicated this flag-language congruency effect in a priming paradigm, where the word and pseudo-word targets followed the brief presentation of the flag prime, and this time effects were seen in both languages. We take these findings as evidence for a mechanism that automatically processes linguistic and non-linguistic information concerning the presence or not of a given language. Language membership information can then modulate lexical processing, in line with the architecture of the BIA model, but not the BIA+ model. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. The Infant Facial Expressions of Emotions from Looking at Pictures. Peruvian version

    Directory of Open Access Journals (Sweden)

    Pierina Traverso

    2012-12-01

    Full Text Available The Infant Facial Expressions of Emotions from Looking at Pictures. Peruvian version. The Peruvian version of the Infant Facial Expression of Emotions from Looking at Pictures (IFEEL), an instrument that assesses the interpretation of emotions from pictures of children’s faces, is presented. The original version by Emde, Osofsky & Butterfield (1993) was developed in the United States and involves 30 stimuli. The Peruvian version involves 25 pictures of children with facial features prototypical of the majority of the Peruvian population. A sample of 363 men and women of middle and low socio-economic status, between 19 and 45 years old, was recruited to develop the Peruvian version. From the results, a lexicon was created with the words used by the participants to designate the 14 emotion groups that were obtained. The majority of these groups had adequate reliability in terms of temporal stability. Finally, it was found that socio-economic status (SES) is a variable that generates significant differences in the way people interpret the emotions. Therefore, differentiated reference values of interpretation were created based on this variable.

  12. The Progress of Students Reading Comprehension through Wordless Picture Books

    Directory of Open Access Journals (Sweden)

    Romaida Lubis

    2018-02-01

    Full Text Available A wordless picture book is a unique kind of book that can help young learners develop literacy. The content of a wordless picture book must be communicated through its visual illustrations. This research discusses a case study of how a six-year-old child produced his narrative through a wordless picture book. The child was allowed to look at each page, say what he saw, and then write down the words he had mentioned. Practicing reading repeatedly, which increases fluency, improves his reading comprehension and written expression. This research was conducted to better understand the sense-making process that happens when a child works with a wordless picture book. Most sentences or texts were made based on references and experiences from daily life, either explicitly or implicitly. In reading a wordless book, readers face a variety of visual signs. These sign systems help readers form a type of framework that shows their interpretation of the text and helps them build a construction of the story. The researcher wanted to help readers better understand the strategies that the child used to make sense of the wordless text. The purpose of this study is to examine how a six-year-old nonreader would interpret visual cues in wordless picture books. Transacting with the visual text in the books helped the child make sense of the stories. The data were analyzed based on the principles of qualitative content analysis, which involve a systematic review of the data, coding, category construction and analysis. The result of this research is that wordless picture books give children the opportunity to create the story on their own and to bring their own understanding of the world to the text.

  13. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of the multimodal grammars used for integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions for automating grammar generation and update processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and afterward makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.
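
    The record describes an MDL-guided search: candidate generalisations are accepted only when they shorten the combined description of the grammar and of the sample it parses. The toy scoring function below illustrates that idea with hypothetical data structures (rules as head/body pairs, parses as sequences of rule choices); it is not the paper's algorithm or its learning operators.

        import math

        def grammar_description_length(rules):
            """Bits to encode the grammar: each rule is a head plus a body of symbols,
            with every symbol coded in log2(|alphabet|) bits."""
            symbols = {s for head, body in rules for s in (head, *body)}
            bits_per_symbol = math.log2(max(len(symbols), 2))
            return sum((1 + len(body)) * bits_per_symbol for head, body in rules)

        def data_description_length(rules, parses):
            """Bits to encode the positive samples as sequences of rule choices."""
            bits_per_rule = math.log2(max(len(rules), 2))
            return sum(len(parse) * bits_per_rule for parse in parses)

        def mdl_score(rules, parses):
            """Total description length; a candidate generalisation (e.g. after a
            merge operator) is kept only if it lowers this score, which is one
            common way of avoiding over-generalization."""
            return grammar_description_length(rules) + data_description_length(rules, parses)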

  14. Compound words prompt arbitrary semantic associations in conceptual memory

    OpenAIRE

    Boutonnet, Bastien; McClain, Rhonda; Thierry, Guillaume

    2014-01-01

    Linguistic relativity theory has received empirical support in domains such as colour perception and object categorisation. It is unknown however, whether relations between words idiosyncratic to language impact nonverbal representations and conceptualisations. For instance, would one consider the concepts of horse and sea as related were it not for the existence of the compound seahorse? Here, we investigated such arbitrary conceptual relationships using a non-linguistic picture relatedness ...

  15. Language production in a shared task: Cumulative semantic interference from self- and other-produced context words

    OpenAIRE

    Hoedemaker, R.; Ernst, J.; Meyer, A.; Belke, E.

    2017-01-01

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this paradigm, naming latencies have been found to increase for successive presentations of exemplars from the same category, a phenomenon known as Cumulative Semantic Interference (CSI). As expected, th...

  16. Multimodal imaging Gd-nanoparticles functionalized with Pittsburgh compound B or a nanobody for amyloid plaques targeting.

    Science.gov (United States)

    Pansieri, Jonathan; Plissonneau, Marie; Stransky-Heilkron, Nathalie; Dumoulin, Mireille; Heinrich-Balard, Laurence; Rivory, Pascaline; Morfin, Jean-François; Toth, Eva; Saraiva, Maria Joao; Allémann, Eric; Tillement, Olivier; Forge, Vincent; Lux, François; Marquette, Christel

    2017-07-01

    Gadolinium-based nanoparticles were functionalized with either the Pittsburgh compound B or a nanobody (B10AP) in order to create multimodal tools for an early diagnosis of amyloidoses. The ability of the functionalized nanoparticles to target amyloid fibrils made of β-amyloid peptide, amylin or Val30Met-mutated transthyretin, formed in vitro or from pathological tissues, was investigated by a range of spectroscopic and biophysical techniques including fluorescence microscopy. Nanoparticles functionalized by both probes efficiently interacted with the three types of amyloid fibrils, with KD values in the 10 micromolar and 10 nanomolar range for Pittsburgh compound B and B10AP nanoparticles, respectively. Moreover, they allowed the detection of amyloid deposits on pathological tissues. Such functionalized nanoparticles could represent promising flexible and multimodal imaging tools for the early diagnosis of amyloid diseases, namely Alzheimer's disease, type 2 diabetes mellitus and familial amyloidotic polyneuropathy.

  17. Picture languages formal models for picture recognition

    CERN Document Server

    Rosenfeld, Azriel

    1979-01-01

    Computer Science and Applied Mathematics: Picture Languages: Formal Models for Picture Recognition treats pictorial pattern recognition from the formal standpoint of automata theory. This book emphasizes the capabilities and relative efficiencies of two types of automata, array automata and cellular array automata, with respect to various array recognition tasks. The array automata are simple processors that perform sequences of operations on arrays, while the cellular array automata are arrays of processors that operate on pictures in a highly parallel fashion, one processor per picture element. This compilation also reviews a collection of results on two-dimensional sequential and parallel array acceptors. Some of the analogous one-dimensional results and array grammars and their relation to acceptors are likewise covered in this text. This publication is suitable for researchers, professionals, and specialists interested in pattern recognition and automata theory.

  18. Hemispheric resource limitations in comprehending ambiguous pictures.

    Science.gov (United States)

    White, H; Minor, S W

    1990-03-01

    Ambiguous pictures (Rorschach inkblots) were lateralized for 100 msec vs. 200 msec to the right and left hemispheres (RH and LH) of 32 normal right-handed males who determined which of two previously presented words (an accurate or inaccurate one) better described the inkblot. Over the first 32 trials, subjects receiving each stimulus exposure duration were less accurate when the hemisphere receiving the stimulus also controlled the hand used to register a keypress response (RH-left hand and LH-right hand trials) than when hemispheric resources were shared, i.e., when one hemisphere controlled stimulus processing and the other controlled response programming. These differences were eliminated when the 32 trials were repeated.

  19. Speech, "Inner Speech," and the Development of Short-Term Memory: Effects of Picture-Labeling on Recall.

    Science.gov (United States)

    Hitch, Graham J.; And Others

    1991-01-01

    Reports on experiments to determine effects of overt speech on children's use of inner speech in short-term memory. Word length and phonemic similarity had greater effects on older children and when pictures were labeled at presentation. Suggests that speaking or listening to speech activates an internal articulatory loop. (Author/GH)

  20. Lexical and semantic representations of L2 cognate and noncognate words acquisition in children : evidence from two learning methods

    OpenAIRE

    Comesaña, Montserrat; Soares, Ana Paula; Sánchez-Casas, Rosa; Lima, Cátia

    2012-01-01

    How bilinguals represent words in two languages and which mechanisms are responsible for second language acquisition are important questions in the bilingual and vocabulary acquisition literature. This study aims to analyze the effect of two learning methods (picture-based vs. word-based method) and two types of words (cognates and noncognates) in early stages of children’s L2 acquisition. Forty-eight native speakers of European Portuguese, all sixth graders (mean age= 10.87 years; SD= 0....

  1. Expanding the area of classical philology: International words

    Directory of Open Access Journals (Sweden)

    Vibeke Roggen

    2014-11-01

    Full Text Available The classical languages, Greek and Latin, have a special kind of afterlife, namely through their explosive expansion into other languages, from antiquity until today. The aim of the present paper is to give a broad survey of this field of study – enough to show that there is a lot to find. English, Spanish and Norwegian are chosen as examples – three Indo-European languages, all of them with rich material for our purpose. In the national philologies, the treatment of the Greek and Latin elements is often not given special attention; these elements are studied alongside other aspects of the language in question. A cooperation with classical philology would be an advantage. Moreover, only classical philology can give the full picture, seen from the point of view of Greek and Latin, and explain why and how these languages have lent so many words and word elements to so many vernacular languages. Another aspect of the field, which I call ‘international words’, is the enormous potential that these words have, if disseminated in a good way to the general population. If taught systematically, the learner will be able to see the connections between words, learn new words faster, and develop a deeper understanding of the vocabularies of – for example – English, Spanish and Norwegian.

  2. The Role of Book Features in Young Children's Transfer of Information from Picture Books to Real-World Contexts.

    Science.gov (United States)

    Strouse, Gabrielle A; Nyhout, Angela; Ganea, Patricia A

    2018-01-01

    Picture books are an important source of new language, concepts, and lessons for young children. A large body of research has documented the nature of parent-child interactions during shared book reading. A new body of research has begun to investigate the features of picture books that support children's learning and transfer of that information to the real world. In this paper, we discuss how children's symbolic development, analogical reasoning, and reasoning about fantasy may constrain their ability to take away content information from picture books. We then review the nascent body of findings that has focused on the impact of picture book features on children's learning and transfer of words and letters, science concepts, problem solutions, and morals from picture books. In each domain of learning we discuss how children's development may interact with book features to impact their learning. We conclude that children's ability to learn and transfer content from picture books can be disrupted by some book features and research should directly examine the interaction between children's developing abilities and book characteristics on children's learning.

  5. Eye movements characteristics of Chinese dyslexic children in picture searching.

    Science.gov (United States)

    Huang, Xu; Jing, Jin; Zou, Xiao-Bing; Wang, Meng-Long; Li, Xiu-Hong; Lin, Ai-Hua

    2008-09-05

    Reading Chinese, a kind of ideogram, relies more on visual cognition. The visuospatial cognitive deficit of Chinese dyslexia is an interesting topic that has received much attention. The purpose of the current research was to explore the visuospatial cognitive characteristics of Chinese dyslexic children by studying their eye movements in a picture searching test. According to the diagnostic criteria defined by ICD-10, twenty-eight dyslexic children (mean age (10.12 +/- 1.42) years) were enrolled from the Clinic of Children Behavioral Disorder in the third affiliated hospital of Sun Yat-sen University, and 28 normally reading children (mean age (10.06 +/- 1.29) years), 1:1 matched by age, sex, grade and family condition, were chosen from an elementary school in Guangzhou as a control group. Four groups of pictures (cock, accident, canyon, meditate) from the Picture Vocabulary Test were chosen as eye movement experiment targets. All the subjects carried out the picture searching task and their eye movement data were recorded by an Eyelink II High-Speed Eye Tracker. The duration time, average fixation duration, average saccade amplitude, fixation counts and saccade counts were compared between the two groups of children. The dyslexic children had longer total fixation duration and average fixation duration (F = 7.711, P < 0.01; F = 4.520, P < 0.05), more fixation counts and saccade counts (F = 7.498, P < 0.01; F = 11.040, P < 0.01), and a smaller average saccade amplitude (F = 29.743, P < 0.01) compared with controls, but their performance in the picture vocabulary test was the same as that of the control group. The eye movement indexes were affected by the difficulty of the pictures and words; all eye movement indexes except saccade amplitude showed significant differences within groups (P < 0.05). Chinese dyslexic children have abnormal eye movements in picture searching, applying slow fixations, more fixations and small and frequent saccades. Their abnormal eye movement
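
    As an illustration of the eye-movement indexes reported above (fixation counts, total and average fixation duration, saccade counts, average saccade amplitude), the sketch below derives them from a list of fixation events. The data format is an assumption made for the example; it is not the Eyelink II output format, and saccade amplitudes are left in raw screen units rather than degrees of visual angle.

      from dataclasses import dataclass
      from math import hypot

      @dataclass
      class Fixation:
          x: float            # horizontal gaze position (screen units)
          y: float            # vertical gaze position (screen units)
          duration_ms: float  # fixation duration in milliseconds

      def eye_movement_indexes(fixations):
          """Summary indexes for one trial; saccades are the gaps between successive fixations."""
          durations = [f.duration_ms for f in fixations]
          amplitudes = [hypot(b.x - a.x, b.y - a.y)
                        for a, b in zip(fixations, fixations[1:])]
          return {
              "fixation_count": len(fixations),
              "total_fixation_duration_ms": sum(durations),
              "avg_fixation_duration_ms": sum(durations) / len(durations),
              "saccade_count": len(amplitudes),
              "avg_saccade_amplitude": sum(amplitudes) / len(amplitudes) if amplitudes else 0.0,
          }

      # Example trial with three fixations
      trial = [Fixation(100, 200, 220), Fixation(340, 210, 260), Fixation(355, 400, 300)]
      print(eye_movement_indexes(trial))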

  6. Lexical Access in Persian Normal Speakers: Picture Naming, Verbal Fluency and Spontaneous Speech

    Directory of Open Access Journals (Sweden)

    Zahra Sadat Ghoreishi

    2014-06-01

    Full Text Available Objectives: Lexical access is the process by which the basic conceptual, syntactical and morpho-phonological information of words is activated. Most studies of lexical access have focused on picture naming. There is hardly any previous research on other parameters of lexical access, such as verbal fluency and analysis of connected speech, in Persian normal participants. This study investigates lexical access performance in normal speakers in relation to age, sex and education. Methods: The performance of 120 adult Persian speakers on three tasks, including picture naming, verbal fluency and connected speech, was examined using the "Persian Lexical Access Assessment Package". The performance of participants was compared across two gender groups (male/female), three education groups (below 5 years, between 5 and 12 years, above 12 years) and three age groups (18-35 years, 36-55 years, 56-75 years). Results: According to the findings, picture naming performance improved with increasing education and declined with increasing age. The performance of participants in phonological and semantic verbal fluency improved with age and education. No significant difference was seen between males and females in the verbal fluency task. In the analysis of connected speech there were no significant differences between the age and education groups, and only mean length of utterance was significantly higher in males than in females. Discussion: The findings could serve as a preliminary reference for comparing normal subjects and patients on lexical access tasks; furthermore, they could guide the planning of treatment goals for patients with word-finding problems according to age, gender and education.

  7. Pictures in Pictures: Art History and Art Museums in Children's Picture Books

    Science.gov (United States)

    Yohlin, Elizabeth

    2012-01-01

    Children's picture books that recreate, parody, or fictionalize famous artworks and introduce the art museum experience, a genre to which I will refer as "children's art books," have become increasingly popular over the past decade. This essay explores the pedagogical implications of this trend through the family program "Picture Books and Picture…

  8. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    Directory of Open Access Journals (Sweden)

    Vitoria ePiai

    2013-12-01

    Full Text Available Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal colour naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus. Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the anterior cingulate cortex, a region that is likely implementing domain

  9. Cross-modal integration of lexical-semantic features during word processing: evidence from oscillatory dynamics during EEG.

    Directory of Open Access Journals (Sweden)

    Markus J van Ackeren

    Full Text Available In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4-6 Hz) in left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.
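
    For readers unfamiliar with how induced activity in a band such as 4-6 Hz is usually quantified, the sketch below estimates per-trial theta power with a zero-phase band-pass filter and the Hilbert envelope and compares two conditions. The sampling rate, epoch length and synthetic data are assumptions for illustration; this is not the authors' analysis pipeline.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      FS = 500                          # sampling rate in Hz (assumed)
      N_TRIALS, N_SAMPLES = 60, 2 * FS  # 2-s epochs per trial

      def theta_power(epochs, fs=FS, band=(4.0, 6.0)):
          """Mean induced power per trial in `band`, via band-pass filtering + Hilbert envelope."""
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, epochs, axis=-1)     # zero-phase band-pass
          envelope = np.abs(hilbert(filtered, axis=-1))  # analytic amplitude
          return (envelope ** 2).mean(axis=-1)           # power per trial

      rng = np.random.default_rng(0)
      within_modal = rng.standard_normal((N_TRIALS, N_SAMPLES))       # e.g. visual + visual features
      cross_modal = 1.2 * rng.standard_normal((N_TRIALS, N_SAMPLES))  # e.g. visual + auditory features

      diff = theta_power(cross_modal).mean() - theta_power(within_modal).mean()
      print(f"cross-modal minus within-modal theta power: {diff:.3f}")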

  10. Multimodal Aspects of Corporate Social Responsibility Communication

    Directory of Open Access Journals (Sweden)

    Carmen Daniela Maier

    2014-12-01

    Full Text Available This article addresses how the multimodal persuasive strategies of corporate social responsibility communication can highlight a company’s commitment to gender empowerment and environmental protection while simultaneously advertising its products. Drawing on an interdisciplinary methodological framework related to CSR communication, multimodal discourse analysis and gender theory, the article proposes a multimodal analysis model through which it is possible to map and explain the multimodal persuasive strategies employed by the Coca-Cola company in their community-related films. By examining the semiotic modes’ interconnectivity and functional differentiation, this analytical endeavour expands the existing research work, as the usual textual focus is extended to a multimodal one.

  11. Multimodal sequence learning.

    Science.gov (United States)

    Kemény, Ferenc; Meier, Beat

    2016-02-01

    While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Counter-stereotypical pictures as a strategy for overcoming spontaneous gender stereotypes

    Directory of Open Access Journals (Sweden)

    Eimear eFinnegan

    2015-08-01

    Full Text Available The present research investigated the use of counter-stereotypical pictures as a strategy for overcoming spontaneous gender stereotypes when certain social role nouns and professional terms are read. Across two experiments, participants completed a judgement task in which they were presented with word pairs comprised of a role noun with a stereotypical gender bias (e.g. beautician) and a kinship term with definitional gender (e.g. brother). Their task was to quickly decide whether or not both terms could refer to one person. In each experiment they completed 2 blocks of such judgement trials separated by a training session in which they were presented with pictures of people working in gender counter-stereotypical (Experiment 1) or gender stereotypical roles (Experiment 2). To ensure participants were focused on the pictures, they were also required to answer 4 questions on each one relating to the character’s leisure activities, earnings, job satisfaction and personal life. Accuracy of judgements to stereotype incongruent pairings was found to improve significantly across blocks when participants were exposed to counter-stereotype images (9.87%) as opposed to stereotypical images (0.12%), while response times decreased significantly across blocks in both studies. It is concluded that exposure to counter-stereotypical pictures is a valuable strategy for overcoming spontaneous gender stereotype biases in the short term.

  13. Analysis of words to development of augmentative and alternative communication boards for disabled student

    Directory of Open Access Journals (Sweden)

    Andréa Carla Paura

    2011-12-01

    Full Text Available Purpose: The aim of this study was to analyze the contribution of the words used in language assessment instruments and/or the vocabulary used in Brazil to the development of alternative communication boards. Methods: Word lists from the selected assessment instruments were analyzed using a protocol designed for this purpose. The frequency of occurrence of each word was verified across the four word lists from the instruments, as was the frequency of occurrence of these words according to the classification proposed by the Picture Communication Symbols (PCS) system. Results: The frequency of words that occurred only once was 67.88%, and the frequency of occurrence of concrete and abstract nouns in the instruments was 60.04%. The instrument that presented words with more than one occurrence was the Peabody Picture Vocabulary Test (PPVT). Conclusions: The use of tools that are already in use and standardized may contribute to the process of evaluation, selection and deployment of augmentative and alternative communication resources for children and youth with disabilities.

  14. The Impact of Presenting Semantically Related Clusters of New Words on Iranian Intermediate EFL learners' Vocabulary Acquisition

    Directory of Open Access Journals (Sweden)

    Saiede Shiri

    2017-09-01

    Full Text Available Teaching vocabulary in semantically related sets is a common practice among EFL teachers. The present study tests the effectiveness of this technique by comparing it with the presentation of semantically unrelated clusters in Iranian intermediate EFL learners. In the study, three intact classes of participants studying in Isfahan were presented with a set of unrelated words through "504 Absolutely Essential Words", a set of related words through "The Oxford Picture Dictionary", and the control group was presented with new words through six texts from "Reading Through Interaction". Comparison of the results indicated that, while both techniques help learners acquire new sets of words, presenting words in semantically unrelated sets seems to be more effective.

  15. Word form Encoding in Chinese Word Naming and Word Typing

    Science.gov (United States)

    Chen, Jenn-Yeu; Li, Cheng-Yi

    2011-01-01

    The process of word form encoding was investigated in primed word naming and word typing with Chinese monosyllabic words. The target words shared or did not share the onset consonants with the prime words. The stimulus onset asynchrony (SOA) was 100 ms or 300 ms. Typing required the participants to enter the phonetic letters of the target word,…

  16. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive; that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  17. Multimodal Processes Rescheduling

    DEFF Research Database (Denmark)

    Bocewicz, Grzegorz; Banaszak, Zbigniew A.; Nielsen, Peter

    2013-01-01

    Cyclic scheduling problems concerning multimodal processes are usually observed in FMSs producing multi-type parts where the Automated Guided Vehicles System (AGVS) plays a role of a material handling system. Schedulability analysis of concurrently flowing cyclic processes (SCCP) executed in the...

  18. Iconicity influences how effectively minimally verbal children with autism and ability-matched typically developing children use pictures as symbols in a search task.

    Science.gov (United States)

    Hartley, Calum; Allen, Melissa L

    2015-07-01

    Previous word learning studies suggest that children with autism spectrum disorder may have difficulty understanding pictorial symbols. Here we investigate the ability of children with autism spectrum disorder and language-matched typically developing children to contextualize symbolic information communicated by pictures in a search task that did not involve word learning. Out of the participant's view, a small toy was concealed underneath one of four unique occluders that were individuated by familiar nameable objects or unfamiliar unnamable objects. Children were shown a picture of the hiding location and then searched for the toy. Over three sessions, children completed trials with color photographs, black-and-white line drawings, and abstract color pictures. The results revealed no group differences; neither children with autism spectrum disorder nor typically developing children were influenced by occluder familiarity, and both groups' errorless retrieval rates were above chance with all three picture types. However, both groups made significantly more errorless retrievals in the most iconic (photograph) trials, and performance was universally predicted by receptive language. Therefore, our findings indicate that children with autism spectrum disorder and young typically developing children can contextualize pictures and use them to adaptively guide their behavior in real time and space. However, this ability is significantly influenced by receptive language development and pictorial iconicity. © The Author(s) 2014.

  19. The emotion potential of words and passages in reading Harry Potter--an fMRI study.

    Science.gov (United States)

    Hsu, Chun-Ting; Jacobs, Arthur M; Citron, Francesca M M; Conrad, Markus

    2015-03-01

    Previous studies suggested that the emotional connotation of single words automatically recruits attention. We investigated the potential of words to induce emotional engagement when reading texts. In an fMRI experiment, we presented 120 text passages from the Harry Potter book series. Results showed significant correlations between affective word (lexical) ratings and passage ratings. Furthermore, affective lexical ratings correlated with activity in regions associated with emotion, situation model building, multi-modal semantic integration, and Theory of Mind. We distinguished differential influences of affective lexical, inter-lexical, and supra-lexical variables: differential effects of lexical valence were significant in the left amygdala, while effects of arousal-span (the dynamic range of arousal across a passage) were significant in the left amygdala and insula. However, we found no differential effect of passage ratings in emotion-associated regions. Our results support the hypothesis that the emotion potential of short texts can be predicted by lexical and inter-lexical affective variables. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Multimodal Resources in Transnational Adoption

    DEFF Research Database (Denmark)

    Raudaskoski, Pirkko Liisa

    The paper discusses an empirical analysis which highlights the multimodal nature of identity construction. A documentary on transnational adoption provides real life incidents as research material. The incidents involve (or from them emerge) various kinds of multimodal resources and participants...

  1. Parent-child picture-book reading, mothers' mental state language and children's theory of mind.

    Science.gov (United States)

    Adrian, Juan E; Clemente, Rosa A; Villanueva, Lidon; Rieffe, Carolien

    2005-08-01

    This study focuses on parent-child book reading and its connection to the development of a theory of mind. First, parents were asked to report about frequency of parent-child storybook reading at home. Second, mothers were asked to read four picture-books to thirty-four children between 4;0 and 5;0. Both frequency of parent-child storybook reading at home, and mother's use of mental state terms in picture-books reading tasks were significantly associated with success on false belief tasks, after partialling out a number of potential mediators such as age of children, verbal IQ, paternal education, and words used by mothers in joint picture-book reading. Among the different mental state references (cognitive terms, desires, emotions and perceptions), it was found that the frequency and variety of cognitive terms, but also the frequency of emotional terms correlated positively with children's false belief performance. Relationships between mental state language and theory of mind are discussed.

  2. Multimodal Diversity of Postmodernist Fiction Text

    Directory of Open Access Journals (Sweden)

    U. I. Tykha

    2016-12-01

    Full Text Available The article is devoted to the analysis of structural and functional manifestations of multimodal diversity in postmodernist fiction texts. Multimodality is defined as the coexistence of more than one semiotic mode within a certain context. Multimodal texts feature a diversity of semiotic modes in the communication and development of their narrative. Such experimental texts subvert conventional patterns by introducing various semiotic resources – verbal or non-verbal.

  3. Experiments in Multimodal Information Presentation

    NARCIS (Netherlands)

    van Hooijdonk, Charlotte; Bosma, W.E.; Krahmer, Emiel; Maes, Alfons; Theune, Mariet; van den Bosch, Antal; Bouma, Gosse

    In this chapter we describe three experiments investigating multimodal information presentation in the context of a medical QA system. In Experiment 1, we wanted to know how non-experts design (multimodal) answers to medical questions, distinguishing between what questions and how questions. In

  4. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  5. An object cue is more effective than a word in ERP-based detection of deception.

    Science.gov (United States)

    Cutmore, Tim R H; Djakovic, Tatjana; Kebbell, Mark R; Shum, David H K

    2009-03-01

    Recent studies of deception have used a form of the guilty knowledge test along with the oddball P300 event-related potential (ERP) to uncover hidden memories. These studies typically have used words as the cuing stimuli. In the present study, a mock crime was enacted by participants to prime their episodic memory, and different memory cue types (Words, Pictures of Objects and Faces) were created to investigate their relative efficacy in identifying guilt. A peak-to-peak (p-p) P300 response was computed for the rare known non-guilty item (target), the rare guilty knowledge item (probe) and the frequently presented unknown items (irrelevant). The difference in this P300 measure between the probe and the irrelevant items was the key dependent variable. Object cues were found to be the most effective, particularly at the parietal site. A bootstrap procedure commonly used to detect deception in individual participants by comparing their probe and irrelevant P300 p-p showed the object cues to provide the best discrimination. Furthermore, using all three of the cue types together provided high detection accuracy (94%). These results confirm prior findings on the utility of ERPs for detecting deception. More importantly, they provide support for the hypothesis that direct cueing with a picture of the crime object may be more effective than using a word (consistent with the picture superiority effect reported in the literature). Finally, a face cue (e.g., the crime victim) may also provide a useful probe for detection of guilty knowledge, but this stimulus form needs to be chosen with due caution.
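
    The bootstrap classification mentioned above can be sketched as follows: repeatedly resample probe and irrelevant trials, recompute the peak-to-peak amplitude of each averaged ERP, and classify a participant as guilty if the probe exceeds the irrelevant measure in a sufficiently large proportion of resamples. The epoch length, trial counts, 90% criterion and synthetic data below are illustrative assumptions, not the study's exact parameters.

      import numpy as np

      rng = np.random.default_rng(1)

      def peak_to_peak(erp):
          """Peak-to-peak amplitude (max minus min) of an averaged ERP segment."""
          return erp.max() - erp.min()

      def bootstrap_guilty(probe_trials, irrelevant_trials, n_boot=1000, criterion=0.9):
          """Classify as 'guilty' if the probe p-p beats the irrelevant p-p in >= criterion of resamples."""
          wins = 0
          for _ in range(n_boot):
              p = probe_trials[rng.integers(0, len(probe_trials), len(probe_trials))]
              i = irrelevant_trials[rng.integers(0, len(irrelevant_trials), len(irrelevant_trials))]
              if peak_to_peak(p.mean(axis=0)) > peak_to_peak(i.mean(axis=0)):
                  wins += 1
          return wins / n_boot >= criterion

      # Synthetic single-trial segments (trials x samples) standing in for real EEG epochs
      probe = rng.standard_normal((30, 300)) + np.linspace(0, 3, 300)       # larger P300-like deflection
      irrelevant = rng.standard_normal((120, 300)) + np.linspace(0, 1, 300)
      print("classified as guilty:", bootstrap_guilty(probe, irrelevant))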

  6. Priming trait inferences through pictures and moving pictures: the impact of open and closed mindsets.

    Science.gov (United States)

    Fiedler, Klaus; Schenck, Wolfram; Watling, Marlin; Menges, Jochen I

    2005-02-01

    A newly developed paradigm for studying spontaneous trait inferences (STI) was applied in 3 experiments. The authors primed dyadic stimulus behaviors involving a subject (S) and an object (O) person through degraded pictures or movies. An encoding task called for the verification of either a graphical feature or a semantic interpretation, which either fit or did not fit the primed behavior. Next, participants had to identify a trait word that appeared gradually behind a mask and that either matched or did not match the primed behavior. STI effects, defined as shorter identification latencies for matching than nonmatching traits, were stronger for S than for O traits, after graphical rather than semantic encoding decisions and after encoding failures. These findings can be explained by assuming that trait inferences are facilitated by open versus closed mindsets supposed to result from distracting (graphical) encoding tasks or encoding failures (involving nonfitting interpretations).

  7. Multimodal Discourse Analysis of the Movie "Argo"

    Science.gov (United States)

    Bo, Xu

    2018-01-01

    Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…

  8. The influence of the picture superiority effect on performance in the word and picture form of the Free and Cued Selective Reminding Test

    OpenAIRE

    Thorley, Natasha

    2013-01-01

    Background: The Free and Cued Selective Reminding Test (FCSRT) is a delayed cued recall test that controls attention and cognitive processing to obtain a measure of episodic memory that is unconfounded by normal age-related changes in cognition. Performance in the FCSRT is sensitive to the early changes in episodic memory associated with Alzheimer’s disease (AD). There are two forms of the FCSRT: a ‘word’ form and a ‘picture’ form. This study aimed to examine whether the picture superiority e...

  9. The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.

    Science.gov (United States)

    Nygaard, Lynne C; Herold, Debora S; Namy, Laura L

    2009-01-01

    This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language. Copyright © 2009 Cognitive Science Society, Inc.

  10. Using Spanish-English Cognates in Children's Choices Picture Books to Develop Latino English Learners' Linguistic Knowledge

    Science.gov (United States)

    Hernández, Anita C.; Montelongo, José A.; Herter, Roberta J.

    2016-01-01

    Educators can take advantage of Latino English learners' linguistic backgrounds by teaching Spanish-English cognate vocabulary using the Children's Choices picture books. Cognates are words that have identical or nearly identical spellings and meanings in two languages because of their Latin and Greek origins. Students can learn to recognize…

  11. A Picture is Worth a Thousand Words: Examining learners’ illustrations to understand Attitudes towards Mathematics

    Directory of Open Access Journals (Sweden)

    Farhat Syyeda

    2015-04-01

    Full Text Available This article presents my experience of using pictures/images drawn by children as a form of data in research and discusses the merits and implications of employing this method. It comes from a mixed-method exploratory case study investigating the attitudes of 11 and 15 year old secondary school students (in the East Midlands) towards Mathematics. The aim of this research was to gain an insight into the emotions, cognition, beliefs and behaviour of learners regarding Maths and the factors which influence their attitude. Besides using tried and tested data collection tools such as focus groups and questionnaires, the children were asked to draw pictures illustrating their vision of Maths and its impact on their lives. The idea was to offer them an alternative medium of communication to exhibit their feelings and thoughts. Students used emoticons, numerals, figures, characters and mathematical symbols to show their favourable/unfavourable attitudes towards Maths and their understanding of the importance of Maths in future life. The results of the visual data in this study conform to the findings from the other forms of data collected and show that boys and higher ability students have a more positive attitude towards Mathematics than girls and low ability students.

  12. Multimodal exemplification: The expansion of meaning in electronic ...

    African Journals Online (AJOL)

    Functional Multimodal Discourse Analysis (SF-MDA) and argues for improving their exemplification multimodally. Multimodal devices, if well coordinated, can help optimize e-dictionary examples in informativity, diversity, dynamicity and ...

  13. Age of acquisition effects in word recognition and production in first and second languages

    Directory of Open Access Journals (Sweden)

    Andrew W. Ellis

    2002-01-01

    Full Text Available Four experiments explored the age of acquisition effects in the first and second languages of dominant Spanish-English bilinguals. In Experiment 1 (picture naming task) and Experiment 2 (lexical decision task), an age of acquisition effect was observed in a second language acquired after childhood as well as in the first language. The results suggest that age of acquisition effects reflect the order of word acquisition, which may in turn reflect the state of the lexical network when new words are learnt. The results do not support the idea that age of acquisition effects reflect differences between words learned during some critical period in childhood and words learned later in life. In Experiments 3 and 4, the age/order of second language acquisition affected lexical decision latencies regardless of the age at which translation equivalents were acquired in the first language, suggesting that the age of acquisition effect is linked to the acquisition of word forms rather than meanings.

  14. Bye-bye mummy - Word comprehension in 9-month-old infants.

    Science.gov (United States)

    Syrnyk, Corinne; Meints, Kerstin

    2017-06-01

    From the little research that exists on the onset of word learning in infants under the age of 1 year, the evidence suggests an idiosyncratic comprehensive vocabulary is developing. To further this field, we tested 49 nine-month-old infants by pre-assessing their vocabularies using a UK version of the MacArthur-Bates Communicative Developmental Inventory. Intermodal preferential looking (IPL) was then used to examine word comprehension including: (a) words parents reported as understood, (b) words infants are expected to understand according to age-related frequency data, and (c) words parents had reported infants not to understand. Assuming parents are good assessors of their infant's early word knowledge, we expected a naming effect with IPL in condition (a), but not condition (c). As language research uses standard samples of words, we expected a discernible naming effect in condition (b). Results show clear IPL evidence of word comprehension for those words that parents reported their infants to understand (condition a). This agreement between methods demonstrates the usefulness of the parental communicative development inventory in conjunction with IPL to assess infants' individual word knowledge. No naming effects were found for condition (c) and the lack of naming effects in (b) shows that pre-established word lists may not give a sufficiently clear picture of infants' true vocabulary - an important insight for researchers and practitioners alike. Statement of contribution: What is already known on this subject? Most word comprehension research is mainly based on older infants (12, 15, or 18 months of age to 2-3 years and older). Some evidence of word comprehension for common and novel nouns in 6- to 10-month-olds. Existing evidence uses either only specific word groups or nouns combined with specific training and/or repetition procedures. What does this study add? Nine-month-olds display word knowledge independent of context and without repetitions of words
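
    As a concrete illustration of how an IPL naming effect is commonly quantified, the sketch below computes the proportion of looking to the target picture before and after the auditory label and takes the difference. The data structure and the numbers are assumptions for the example, not the authors' analysis.

      def proportion_target_looking(target_ms, distracter_ms):
          """Proportion of total looking time spent on the target picture."""
          total = target_ms + distracter_ms
          return target_ms / total if total else 0.0

      def naming_effect(pre, post):
          """Post-naming minus pre-naming proportion of target looking for one trial.

          `pre` and `post` are (target_ms, distracter_ms) tuples for the analysis
          windows before and after the auditory label.
          """
          return proportion_target_looking(*post) - proportion_target_looking(*pre)

      # One hypothetical trial: looking is near chance before naming and shifts to the target after it
      trial_pre, trial_post = (900, 1100), (1500, 700)
      print(f"naming effect: {naming_effect(trial_pre, trial_post):+.2f}")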

  15. Motivational priming and processing interrupt: startle reflex modulation during shallow and deep processing of emotional words.

    Science.gov (United States)

    Herbert, Cornelia; Kissler, Johanna

    2010-05-01

    Valence-driven modulation of the startle reflex, that is, larger eyeblinks during viewing of unpleasant pictures and inhibited blinks while viewing pleasant pictures, is well documented. The current study investigated whether this motivational priming pattern also occurs during processing of unpleasant and pleasant words, and to what extent it is influenced by shallow vs. deep encoding of verbal stimuli. Emotional and neutral adjectives were presented for 5 s, and the acoustically elicited startle eyeblink response was measured while subjects memorized the words by means of shallow or deep processing strategies. Results showed blink potentiation to unpleasant and blink inhibition to pleasant adjectives in subjects using shallow encoding strategies. In subjects using deep encoding strategies, blinks were larger for pleasant than unpleasant or neutral adjectives. In line with this, free recall of pleasant words was also better in subjects who engaged in deep processing. The results suggest that motivational priming holds as long as processing is perceptual. However, during deep processing the startle reflex appears to represent a measure of "processing interrupt", facilitating blinks to those stimuli that are more deeply encoded. Copyright 2010 Elsevier B.V. All rights reserved.

  16. Seeing is believing: the direct and contingent influence of pictures in health promotion advertising.

    Science.gov (United States)

    Chang, Chingching

    2013-01-01

    Because pictures, compared with words, are more effective in triggering vivid imagery, their effects should increase in situations in which they play a crucial role in facilitating imagery. This study accordingly explored the relative effects of information presented in pictorial formats and verbal formats in health promotion advertising. Symptoms presented in pictorial formats increased perceptions of the severity of a disease, whereas prevention options presented in pictorial formats enhanced efficacy in preventing the disease. This study also examined two contingent situations: when people were oriented toward visual processing, and when imagery could not be easily triggered without the help of pictures, such as when symptoms or prevention options were difficult or unpleasant to imagine. The findings of three studies supported the offered predictions.

  17. Consideration of vision and picture quality: psychological effects induced by picture sharpness

    Science.gov (United States)

    Kusaka, Hideo

    1989-08-01

    A psychological hierarchy model of human vision(1)(2) suggests that the visual signals are processed in a serial manner from lower to higher stages: that is "sensation" - "perception" - "emotion." For designing a future television system, it is important to find out what kinds of physical factors affect the "emotion" experienced by an observer in front of the display. This paper describes the psychological effects induced by the sharpness of the picture. The subjective picture quality was evaluated for the same pictures with five different levels of sharpness. The experiment was performed on two kinds of printed pictures: (A) a woman's face, and (B) a town corner. From these experiments, it was found that the amount of high-frequency peaking (physical value of the sharpness) which psychologically gives the best picture quality, differs between pictures (A) and (B). That is, the optimum picture sharpness differs depending on the picture content. From these results, we have concluded that the psychophysical sharpness of the picture is not only determined at the stage of "perception" (e.g., resolution or signal to noise ratio, which everyone can judge immediately), but also at the stage of "emotion" (e.g., sensation of reality or beauty).
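
    To make "high-frequency peaking" concrete as a physical sharpness control, the sketch below implements it as simple unsharp masking (adding back a scaled high-frequency component) at five gain levels. The blur kernel, gain values and random test image are assumptions chosen for illustration, not the stimuli or processing used in the study.

      import numpy as np

      def box_blur(img, radius=1):
          """Simple box blur used here as the low-pass reference image."""
          k = 2 * radius + 1
          padded = np.pad(img, radius, mode="edge")
          out = np.zeros_like(img, dtype=float)
          for dy in range(k):
              for dx in range(k):
                  out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
          return out / (k * k)

      def peak_high_frequencies(img, gain=0.8):
          """Add back a scaled high-frequency component: img + gain * (img - blurred img)."""
          high = img - box_blur(img)
          return np.clip(img + gain * high, 0.0, 1.0)

      image = np.random.default_rng(2).random((64, 64))  # stand-in for a test picture
      for gain in (0.0, 0.4, 0.8, 1.2, 1.6):             # five increasing sharpness levels
          print(gain, round(float(peak_high_frequencies(image, gain).std()), 4))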

  18. Posture affects how robots and infants map words to objects.

    Directory of Open Access Journals (Sweden)

    Anthony F Morse

    Full Text Available For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body - and its momentary posture - may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1-3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1-5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6-9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge - not from separating bodily information from the word-object mapping as proposed in previous models of the role of space in word-object mapping - but through the body's momentary disposition in space.

  19. Cascaded Processing in Written Naming: Evidence from the Picture-Picture Interference Paradigm

    Science.gov (United States)

    Roux, Sebastien; Bonin, Patrick

    2012-01-01

    The issue of how information flows within the lexical system in written naming was investigated in five experiments. In Experiment 1, participants named target pictures that were accompanied by context pictures having phonologically and orthographically related or unrelated names (e.g., a picture of a "ball" superimposed on a picture of…

  20. Effects on Communicative Requesting and Speech Development of the Picture Exchange Communication System in Children with Characteristics of Autism

    Science.gov (United States)

    Ganz, Jennifer B.; Simpson, Richard L.

    2004-01-01

    Few studies on augmentative and alternative communication (AAC) systems have addressed the potential for such systems to impact word utterances in children with autism spectrum disorders (ASD). The Picture Exchange Communication System (PECS) is an AAC system designed specifically to minimize difficulties with communication skills experienced by…

  1. Serial and parallel processing in reading: investigating the effects of parafoveal orthographic information on nonisolated word recognition.

    Science.gov (United States)

    Dare, Natasha; Shillcock, Richard

    2013-01-01

    We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n - 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual.

  2. The picture superiority effect in categorization: visual or semantic?

    Science.gov (United States)

    Job, R; Rumiati, R; Lotto, L

    1992-09-01

    Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated because Ss took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike Snodgrass and McCullough, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.

  3. [Usefulness of the 10 pictures reminding test for memory assessment for the diagnosis of Alzheimer's disease, mild cognitive impairment and anxiety/depression].

    Science.gov (United States)

    Federico, D; Thomas-Anterion, C; Borg, C; Foyatier Michel, N; Dirson, S; Laurent, B

    2008-10-01

    Episodic memory is often considered to be essential in the neuropsychological examination of elderly people consulting in memory clinics. Therefore, the performance of three different episodic memory tests was compared in Alzheimer's disease (AD), mild cognitive impairment (MCI) and anxiety/depression. Seventy-six patients with AD, 46 with MCI, and 36 with anxiety/depression performed three memory tests: (1) three-word immediate and delayed recall from the MMSE; (2) the 10-pictures reminding test; (3) the 16-item free and cued reminding test. Patients with AD and MCI differed from the depressed/anxious participants on all subcomponents of the memory tests. Only the three-word immediate and delayed recall of the MMSE and the immediate recall (encoding) of the 16-item free and cued reminding test did not differ between AD and MCI. Significant correlations were also evidenced between the free and cued recall of the 10 pictures and the score of the 16 items for all patients. Scores of total and free recall distinguished the three groups of patients; also, a trend was observed for free recall between the patients with AD and MCI. The three-word immediate and delayed recall of the MMSE is linked with hippocampal dysfunction. The present study also suggests that the 10-pictures reminding test is a simple and reliable test for investigating memory, in addition to other evaluation tests. Finally, further studies would be necessary to assess the sensitivity and specificity of these tests.

  4. Napping facilitates word learning in early lexical development.

    Science.gov (United States)

    Horváth, Klára; Myers, Kyle; Foster, Russell; Plunkett, Kim

    2015-10-01

    Little is known about the role that night-time sleep and daytime naps play in early cognitive development. Our aim was to investigate how napping affects word learning in 16-month-olds. Thirty-four typically developing infants were assigned randomly to nap and wake groups. After teaching two novel object-word pairs to infants, we tested their initial performance with an intermodal preferential looking task in which infants are expected to increase their target looking time compared to a distracter after hearing its auditory label. A second test session followed after approximately a 2-h delay. The delay contained sleep for the nap group or no sleep for the wake group. Looking behaviour was measured with an automatic eye-tracker. Vocabulary size was assessed using the Oxford Communicative Development Inventory. A significant interaction between group and session was found in preferential looking towards the target picture. The performance of the nap group increased after the nap, whereas that of the wake group did not change. The gain in performance correlated positively with the expressive vocabulary size in the nap group. These results indicate that daytime napping helps consolidate word learning in infancy. © 2015 European Sleep Research Society.

  5. Associations From Pictures.

    Science.gov (United States)

    Pettersson, Rune

    A picture can be interpreted in different ways by various persons. There is often a difference between a picture's denotation (literal meaning), connotation (associative meaning), and private associations. Two studies were conducted in order to observe the private associations that pictures awaken in people. One study deals with associations made…

  6. Words we do not say-Context effects on the phonological activation of lexical alternatives in speech production.

    Science.gov (United States)

    Jescheniak, Jörg D; Kurtz, Franziska; Schriefers, Herbert; Günther, Josefine; Klaus, Jana; Mädebach, Andreas

    2017-06-01

    There is compelling evidence that context strongly influences our choice of words (e.g., whether we refer to a particular animal with the basic-level name "bird" or the subordinate-level name "duck"). However, little is known about whether the context already affects the degree to which the alternative words are activated. In this study, we explored the effect of a preceding linguistic context on the phonological activation of alternative picture names. In Experiments 1 to 3, the context was established by a request produced by an imaginary interlocutor. These requests either constrained the naming response to the subordinate level on pragmatic grounds (e.g., "name the bird!") or not (e.g., "name the object!"). In Experiment 4, the context was established by the speaker's own previous naming response. Participants named the pictures with their subordinate-level names and the phonological activation of the basic-level names was assessed with distractor words phonologically related versus unrelated to that name (e.g., "birch" vs. "lamp"). In all experiments, we consistently found that distractor words phonologically related to the basic-level name interfered with the naming response more strongly than unrelated distractor words. Moreover, this effect was of comparable size for nonconstraining and constraining contexts indicating that the alternative name was phonologically activated and competed for selection, even when it was not an appropriate lexical option. Our results suggest that the speech production system is limited in its ability of flexibly adjusting and fine-tuning the lexical activation patterns of words (among which to choose from) as a function of pragmatic constraints. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Questions, pictures, answers: introducing pictures in question-answering systems

    NARCIS (Netherlands)

    Theune, Mariet; van Schooten, B.W.; op den Akker, Hendrikus J.A.; Bosma, W.E.; Hofs, D.H.W.; Nijholt, Antinus; Krahmer, E.J.; van Hooijdonk, C.M.J.; Marsi, E.C.; Ruiz Miyarez, L.; Munoz Alvarado, A.; Alvarez Moreno, C.

    We present the Dutch IMIX research programme on multimodal interaction, speech and language technology. We discuss our contributions to this programme in the form of two research projects, IMOGEN and VIDIAM, and the technical integration of the various modules developed by IMIX subprojects to build

  8. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    González González, G.R.; Nijholt, Antinus

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply

  9. Reference Resolution in Multi-modal Interaction: Position paper

    NARCIS (Netherlands)

    Fernando, T.; Nijholt, Antinus

    2002-01-01

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can

  10. Customer Protest: Exit, Voice or Negative Word of Mouth

    Directory of Open Access Journals (Sweden)

    Solvang, B. K.

    2008-01-01

    Full Text Available Of the three forms of protest, the propensity for word of mouth (WOM) seems to be the most common, and the most exclusive form of protest seems to be exit. The propensity for voice lies in between. The costs linked to voice influence the propensity for WOM. The customers seem to weigh the three forms of protest against one another, yet this rational picture of the customer should be moderated. Leaders should improve their treatment of customers making complaints. The more they can handle customer complaints in an orderly and courteous way, the less informal negative word-of-mouth activity they will experience; they will also reduce the exit propensity and lead customers toward the complaint organisation. They should also ensure that their customers feel they get equal treatment from the staff.

  11. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
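
    As a rough, generic illustration of the idea described above (jointly coding stacked modalities while down-weighting poorly corresponding patches in an EM-style loop), the following numpy sketch uses ISTA for the sparse-coding step and a simple inlier/outlier responsibility as the per-patch weight. It is not the authors' model; all function names, priors and constants are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def sparse_code(D, X, lam=0.1, n_iter=50):
            # ISTA for min_A 0.5*||X - D A||^2 + lam*||A||_1 (columns of X are patches).
            A = np.zeros((D.shape[1], X.shape[1]))
            L = np.linalg.norm(D, 2) ** 2 + 1e-8      # Lipschitz constant of the gradient
            for _ in range(n_iter):
                A -= D.T @ (D @ A - X) / L
                A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
            return A

        def robust_joint_dictionary(X1, X2, n_atoms=32, n_outer=10, sigma=0.5, outlier_density=1e-3):
            # EM-style loop: weight each patch by how well both modalities are jointly
            # reconstructed (E-step), then update the dictionary with those weights (M-step).
            X = np.vstack([X1, X2])                   # stack the two modalities patch-wise
            D = X[:, rng.choice(X.shape[1], n_atoms, replace=False)].copy()
            D /= np.linalg.norm(D, axis=0, keepdims=True)
            for _ in range(n_outer):
                A = sparse_code(D, X)
                resid = np.sum((X - D @ A) ** 2, axis=0)
                inlier = np.exp(-resid / (2 * sigma ** 2))   # Gaussian inlier likelihood
                w = inlier / (inlier + outlier_density)      # responsibility of being "corresponding"
                D = (X * w) @ A.T @ np.linalg.pinv((A * w) @ A.T + 1e-6 * np.eye(n_atoms))
                D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
            return D, w

        # Toy usage: 500 pairs of 8x8 patches from two loosely coupled "modalities".
        X1 = rng.standard_normal((64, 500))
        X2 = 0.8 * X1 + 0.2 * rng.standard_normal((64, 500))
        D, weights = robust_joint_dictionary(X1, X2)
        print(D.shape, weights.min(), weights.max())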

  12. Individual differences in language ability are related to variation in word recognition, not speech perception: Evidence from eye-movements

    Science.gov (United States)

    McMurray, Bob; Munson, Cheyenne; Tomblin, J. Bruce

    2013-01-01

    Purpose: This study examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking if lexical competition is differentially sensitive to fine-grained acoustic variation. Methods: 74 adolescents with a range of language abilities (including 35 impaired) participated in an experiment based on McMurray, Tanenhaus and Aslin (2002). Participants heard tokens from six 9-step Voice Onset Time (VOT) continua spanning two words (beach/peach, beak/peak, etc.), while viewing a screen containing pictures of those words and two unrelated objects. Participants selected the referent while eye-movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Results: Eye-movements were sensitive to within-category VOT differences: as VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Conclusions: Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities. PMID:24687026

  13. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    Science.gov (United States)

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Adolescents with a range of language abilities (N = 74, including 35 impaired) participated in an experiment based on McMurray, Tanenhaus, and Aslin (2002). Participants heard tokens from six 9-step voice onset time (VOT) continua spanning 2 words (beach/peach, beak/peak, etc.) while viewing a screen containing pictures of those words and 2 unrelated objects. Participants selected the referent while eye movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Eye movements were sensitive to within-category VOT differences: As VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities.

  14. FN400 and LPC memory effects for concrete and abstract words.

    Science.gov (United States)

    Stróżak, Paweł; Bird, Christopher W; Corby, Krystin; Frishkoff, Gwen; Curran, Tim

    2016-11-01

    According to dual-process models, recognition memory depends on two neurocognitive mechanisms: familiarity, which has been linked to the frontal N400 (FN400) effect in studies using ERPs, and recollection, which is reflected by changes in the late positive complex (LPC). Recently, there has been some debate over the relationship between FN400 familiarity effects and N400 semantic effects. According to one view, these effects are one and the same. Proponents of this view have suggested that the frontal distribution of the FN400 could be due to stimulus concreteness: recognition memory experiments commonly use highly imageable or concrete words (or pictures), which elicit semantic ERPs with a frontal distribution. In the present study, we tested this claim using a recognition memory paradigm in which subjects memorized concrete and abstract nouns; half of the words changed font color between study and test. FN400 and LPC old/new effects were observed for abstract as well as concrete words, and were stronger over right hemisphere electrodes for concrete words. However, there was no difference in anteriority of the FN400 effect for the two word types. These findings challenge the notion that the frontal distribution of the FN400 old/new effect is fully explained by stimulus concreteness. © 2016 Society for Psychophysiological Research.

  15. FN400 and LPC memory effects for concrete and abstract words

    Science.gov (United States)

    Stróżak, Paweł; Bird, Christopher W.; Corby, Krystin; Frishkoff, Gwen; Curran, Tim

    2016-01-01

    According to dual-process models, recognition memory depends on two neurocognitive mechanisms: familiarity, which has been linked to the "frontal N400" (FN400) effect in studies using event-related potentials (ERPs), and recollection, which is reflected by changes in the late positive complex (LPC). Recently, there has been some debate over the relationship between FN400 familiarity effects and N400 semantic effects. According to one view, these effects are one and the same. Proponents of this view have suggested that the frontal distribution of the FN400 could be due to stimulus concreteness: recognition memory experiments commonly use highly imageable or concrete words (or pictures), which elicit semantic ERPs with a frontal distribution. In the present study we tested this claim using a recognition memory paradigm in which subjects memorized concrete and abstract nouns; half of the words changed font color between study and test. FN400 and LPC old/new effects were observed for abstract, as well as concrete words, and were stronger over right hemisphere electrodes for concrete words. However, there was no difference in anteriority of the FN400 effect for the two word types. These findings challenge the notion that the frontal distribution of the FN400 old/new effect is fully explained by stimulus concreteness. PMID:27463978

  16. Design and practice for a picture archiving and communication system based structured report module

    International Nuclear Information System (INIS)

    Tian Junzhang; Jiang Guihua; Zheng Liyin; Ou Jingchai; Wu Pingyang; Hong Wensong; Jin Lin; Huang Dajiang; Zhang Xuelin

    2004-01-01

    Objective: To design and explore a structured report module based on PACS, producing computer-generated diagnostic reports that combine pictures and words and supporting synchronous transmission of reports and images. Methods: The PACS used a 1000 Mb trunk network with 100 Mb switched to the desktop. The structured report was designed with six function modules, including a basic item area, image sign area, diagnostic impression area, advice area, signature area, and picture area, implemented with programming languages such as Delphi 6.0 and VC++ 6.0. DICOM medical images or waveforms were inserted directly into the diagnostic report by citing DICOM composite objects. A basic function library was designed and constructed for the whole system environment. Results: The PACS-based structured report module produced image diagnosis reports with a consistent, computer-processable structure. Reporting time and turnaround were shortened and the utilization of original report data was improved. Conclusion: The structured report module facilitates integration with clinical teaching and scientific research, and it raised the quality and efficiency of image diagnosis work.
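
    The record above describes six report areas and the citing of DICOM composite objects. As a purely illustrative sketch (not the module's actual schema or toolchain, which used Delphi/VC++), a structured report with those six areas could be modelled and serialised along the following lines.

        from dataclasses import dataclass, field, asdict
        from typing import Dict, List
        import json

        @dataclass
        class StructuredReport:
            # Field names mirror the six areas named in the record; they are
            # illustrative, not the module's real schema.
            basic_items: Dict[str, str] = field(default_factory=dict)   # patient/study identifiers
            image_signs: List[str] = field(default_factory=list)        # coded imaging findings
            diagnostic_impression: str = ""
            advice: str = ""
            signature: str = ""
            picture_refs: List[str] = field(default_factory=list)       # UIDs of cited DICOM composite objects

            def to_json(self) -> str:
                return json.dumps(asdict(self), ensure_ascii=False, indent=2)

        report = StructuredReport(
            basic_items={"patient_id": "0001", "modality": "CR"},
            image_signs=["increased density, right lower lobe"],
            diagnostic_impression="Findings consistent with lobar pneumonia.",
            advice="Follow-up radiograph in two weeks.",
            signature="Reporting radiologist",
            picture_refs=["1.2.840.0.0.0.1"],   # placeholder SOP instance UID
        )
        print(report.to_json())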

  17. Multimode-singlemode-multimode fiber sensor for alcohol sensing application

    Science.gov (United States)

    Rofi'ah, Iftihatur; Hatta, A. M.; Sekartedjo, Sekartedjo

    2016-11-01

    Alcohol is a volatile, flammable liquid that is soluble in both polar and non-polar substances and is used in several industrial sectors. Among the alcohol detection methods now in wide use is the optical fiber sensor. In this paper, a fiber optic sensor based on a Multimode-Singlemode-Multimode (MSM) structure is used to detect alcohol solutions in the concentration range 0-3%. The working principle of the sensor exploits modal interference between the core modes and the cladding modes, which makes the sensor sensitive to environmental changes. The results showed that the sensor characteristics were not affected by the length of the single-mode fiber (SMF); a sensor with a 5 mm single-mode section could sense alcohol with a sensitivity of 0.107 dB/v%.
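
    Taking the reported sensitivity at face value and assuming a linear response over the measured range (an assumption not stated in the record), the expected output-power change can be tabulated as follows.

        # 0.107 dB per volume-percent of alcohol, assumed linear over 0-3 vol%.
        SENSITIVITY_DB_PER_VOLPCT = 0.107
        for concentration in (0.0, 1.0, 2.0, 3.0):
            shift = SENSITIVITY_DB_PER_VOLPCT * concentration
            print(f"{concentration:.0f} vol% alcohol -> {shift:.3f} dB change")
        # Full-scale change over the 0-3 vol% range is therefore about 0.32 dB.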

  18. The (in)dependence of articulation and lexical planning during isolated word production.

    Science.gov (United States)

    Buz, Esteban; Jaeger, T Florian

    The number of phonological neighbors to a word (PND) can affect its lexical planning and pronunciation. Similar parallel effects on planning and articulation have been observed for other lexical variables, such as a word's contextual predictability. Such parallelism is frequently taken to indicate that effects on articulation are mediated by effects on the time course of lexical planning. We test this mediation assumption for PND and find it unsupported. In a picture naming experiment, we measure speech onset latencies (planning), word durations, and vowel dispersion (articulation). We find that PND predicts both latencies and durations. Further, latencies predict durations. However, the effects of PND and latency on duration are independent: parallel effects do not imply mediation. We discuss the consequences for accounts of lexical planning, articulation, and the link between them. In particular, our results suggest that ease of planning does not explain effects of PND on articulation.
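
    One conventional way to probe the mediation question raised above is to regress duration on PND while controlling for onset latency and check whether the PND coefficient survives. The sketch below does this on simulated data with statsmodels; the effect directions and sizes are invented, not the study's estimates.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 500
        pnd = rng.poisson(10, n).astype(float)                    # phonological neighborhood density
        latency = 600 + 5 * pnd + rng.normal(0, 40, n)            # planning time (ms); slope is arbitrary
        duration = 400 + 3 * pnd + 0.05 * latency + rng.normal(0, 20, n)  # word duration (ms); direct PND effect built in

        df = pd.DataFrame({"pnd": pnd, "latency": latency, "duration": duration})
        model = smf.ols("duration ~ pnd + latency", data=df).fit()
        # With a built-in direct effect, PND remains a reliable predictor of duration
        # even with latency in the model, i.e., the latency path does not mediate it.
        print(model.params)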

  19. Multimodal pain management after arthroscopic surgery

    DEFF Research Database (Denmark)

    Rasmussen, Sten

    Multimodal Pain Management after Arthroscopic Surgery By Sten Rasmussen, M.D. The thesis is based on four randomized controlled trials. The main hypothesis was that multimodal pain treatment provides faster recovery after arthroscopic surgery. NSAID was tested against placebo after knee arthroscopy...

  20. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted individuals noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Multimodality, creativity and children's meaning-making: Drawings ...

    African Journals Online (AJOL)

    Multimodality, creativity and children's meaning-making: Drawings, writings, imaginings. ... Framed by social semiotic theories of communication, multimodal ... to create imaginary worlds and express meanings according to their interests.

  2. How Many Words Is a Picture Worth? Integrating Visual Literacy in Language Learning with Photographs

    Science.gov (United States)

    Baker, Lottie

    2015-01-01

    Cognitive research has shown that the human brain processes images quicker than it processes words, and images are more likely than text to remain in long-term memory. With the expansion of technology that allows people from all walks of life to create and share photographs with a few clicks, the world seems to value visual media more than ever…

  3. Tracking the time course of multi-word noun phrase production with ERPs or on when (and why) cat is faster than the big cat.

    Science.gov (United States)

    Bürki, Audrey; Laganaro, Marina

    2014-01-01

    Words are rarely produced in isolation. Yet, our understanding of multi-word production, and especially its time course, is still rather poor. In this research, we use event-related potentials to examine the production of multi-word noun phrases in the context of overt picture naming. We track the processing costs associated with the production of these noun phrases as compared with the production of bare nouns, from picture onset to articulation. Behavioral results revealed longer naming latencies for French noun phrases with determiners and pre-nominal adjectives (D-A-N, the big cat) than for noun phrases with a determiner (D-N, the cat), or bare nouns (N, cat). The spatio-temporal analysis of the ERPs revealed differences in the duration of stable global electrophysiological patterns as a function of utterance format in two time windows, from ~190 to 300 ms after picture onset, and from ~530 ms after picture onset to 100 ms before articulation. These findings can be accommodated in the following model. During grammatical encoding (here from ~190 to 300 ms), the noun and adjective lemmas are accessed in parallel, followed by the selection of the gender-agreeing determiner. Phonological encoding (after ~530 ms) operates sequentially. As a consequence, the phonological encoding process is longer for longer utterances. In addition, when determiners are repeated across trials, their phonological encoding can be anticipated or primed, resulting in a shortened encoding process.

  4. Tracking the time course of multi-word noun phrase production with ERPs or on when (and why) cat is faster than the big cat

    Directory of Open Access Journals (Sweden)

    Audrey eBürki

    2014-07-01

    Words are rarely produced in isolation. Yet, our understanding of multi-word production, and especially its time course, is still rather poor. In this research, we use event-related potentials to examine the production of multi-word noun phrases in the context of overt picture naming. We track the processing costs associated with the production of these noun phrases as compared with the production of bare nouns, from picture onset to articulation. Behavioral results revealed longer naming latencies for French noun phrases with determiners and pre-nominal adjectives (D-A-N, the big cat) than for noun phrases with a determiner (D-N, the cat) or bare nouns (N, cat). The spatio-temporal analysis of the ERPs revealed differences in the duration of stable global electrophysiological patterns as a function of utterance format in two time windows, from ~190 ms to 300 ms after picture onset, and from ~530 ms after picture onset to 100 ms before articulation. These findings can be accommodated in the following model. During grammatical encoding (here from ~190 ms to 300 ms), the noun and adjective lemmas are accessed in parallel, followed by the selection of the gender-agreeing determiner. Phonological encoding (after ~530 ms) operates sequentially. As a consequence, the phonological encoding process is longer for longer utterances. In addition, when determiners are repeated across trials, their phonological encoding can be anticipated or primed, resulting in a shortened encoding process.

  5. Effects of picture prompts on story retelling performance in typically developing children

    Directory of Open Access Journals (Sweden)

    Ana Carolina Sella

    2015-06-01

    Telling and retelling stories and facts are behavioral repertoires that are constantly recruited in social situations, no matter if these situations occur at school, with the family, or at leisure times. This study aimed at systematically evaluating if 11 first graders (age range six to seven) would perform better in retelling tasks when pictorial prompts were presented. Dependent variables were (a) the number of story categories inserted in the retelling tasks and (b) the number of retold words per story. The independent variable was the presentation of visual prompts during story retelling tasks. Results indicated that visual prompts did not result in a consistent increase in performance when the number of story categories inserted was analyzed. Additionally, there was no consistent increase in the number of words retold when pictures were presented. Future studies should investigate whether repeated exposure to stories would result in a significant change in performance.

  6. Cascaded processing in written compound word production

    Directory of Open Access Journals (Sweden)

    Raymond eBertram

    2015-04-01

    In this study we investigated the intricate interplay between central linguistic processing and peripheral motor processes during typewriting. Participants had to typewrite two-constituent (noun-noun) Finnish compounds in response to picture presentation while their typing behavior was registered. As dependent measures we used writing onset time to assess what processes were completed before writing and inter-key intervals to assess what processes were going on during writing. It was found that writing onset time was determined by whole word frequency rather than constituent frequencies, indicating that compound words are retrieved as whole orthographic units before writing is initiated. In addition, we found that the length of the first syllable also affects writing onset time, indicating that the first syllable is fully prepared before writing commences. The inter-key interval results showed that linguistic planning is not fully ready before writing, but cascades into the motor execution phase. More specifically, inter-key intervals were largest at syllable and morpheme boundaries, supporting the view that additional linguistic planning takes place at these boundaries. Bigram and trigram frequency also affected inter-key intervals with shorter intervals corresponding to higher frequencies. This can be explained by stronger memory traces for frequently co-occurring letter sequences in the motor memory for typewriting. These frequency effects were even larger in the second than in the first constituent, indicating that low-level motor memory starts to become more important during the course of writing compound words. We discuss our results in the light of current models of morphological processing and written word production.

  7. Cascaded processing in written compound word production.

    Science.gov (United States)

    Bertram, Raymond; Tønnessen, Finn Egil; Strömqvist, Sven; Hyönä, Jukka; Niemi, Pekka

    2015-01-01

    In this study we investigated the intricate interplay between central linguistic processing and peripheral motor processes during typewriting. Participants had to typewrite two-constituent (noun-noun) Finnish compounds in response to picture presentation while their typing behavior was registered. As dependent measures we used writing onset time to assess what processes were completed before writing and inter-key intervals to assess what processes were going on during writing. It was found that writing onset time was determined by whole word frequency rather than constituent frequencies, indicating that compound words are retrieved as whole orthographic units before writing is initiated. In addition, we found that the length of the first syllable also affects writing onset time, indicating that the first syllable is fully prepared before writing commences. The inter-key interval results showed that linguistic planning is not fully ready before writing, but cascades into the motor execution phase. More specifically, inter-key intervals were largest at syllable and morpheme boundaries, supporting the view that additional linguistic planning takes place at these boundaries. Bigram and trigram frequency also affected inter-key intervals with shorter intervals corresponding to higher frequencies. This can be explained by stronger memory traces for frequently co-occurring letter sequences in the motor memory for typewriting. These frequency effects were even larger in the second than in the first constituent, indicating that low-level motor memory starts to become more important during the course of writing compound words. We discuss our results in the light of current models of morphological processing and written word production.

  8. Towards an intelligent framework for multimodal affective data analysis.

    Science.gov (United States)

    Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin

    2015-03-01

    An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook everyday. In order to cope with the growth of such multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multi-modal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
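
    As a quick consistency check of the figures quoted above (assuming "error rate" simply means 100% minus accuracy, which the record does not state explicitly):

        proposed_accuracy = 87.95                      # % reported for the tri-modal system
        proposed_error = 100.0 - proposed_accuracy     # 12.05 %
        relative_reduction = 0.56                      # "56% reduction in error rate"
        baseline_error = proposed_error / (1.0 - relative_reduction)   # about 27.4 %
        baseline_accuracy = 100.0 - baseline_error                     # about 72.6 %
        print(f"implied baseline accuracy: {baseline_accuracy:.1f} %")
        # An absolute gain of roughly 15 points is consistent with "more than 10%".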

  9. (Re-)Examination of Multimodal Augmented Reality

    NARCIS (Netherlands)

    Rosa, N.E.; Werkhoven, P.J.; Hürst, W.O.

    2016-01-01

    The majority of augmented reality (AR) research has been concerned with visual perception, however the move towards multimodality is imminent. At the same time, there is no clear vision of what multimodal AR is. The purpose of this position paper is to consider possible ways of examining AR other

  10. Combinatorics on words Christoffel words and repetitions in words

    CERN Document Server

    Berstel, Jean; Reutenauer, Christophe; Saliola, Franco V

    2008-01-01

    The two parts of this text are based on two series of lectures delivered by Jean Berstel and Christophe Reutenauer in March 2007 at the Centre de Recherches Mathématiques, Montréal, Canada. Part I represents the first modern and comprehensive exposition of the theory of Christoffel words. Part II presents numerous combinatorial and algorithmic aspects of repetition-free words stemming from the work of Axel Thue, a pioneer in the theory of combinatorics on words. A beginner to the theory of combinatorics on words will be motivated by the numerous examples, and the large variety of exercises, which make the book unique at this level of exposition. The clean and streamlined exposition and the extensive bibliography will also be appreciated. After reading this book, beginners should be ready to read modern research papers in this rapidly growing field and contribute their own research to its development. Experienced readers will be interested in the finitary approach to Sturmian words that Christoffel words offe...

  11. Multimodal integration of anatomy and physiology classes: How instructors utilize multimodal teaching in their classrooms

    Science.gov (United States)

    McGraw, Gerald M., Jr.

    Multimodality is the theory of communication as it applies to social and educational semiotics (making meaning through the use of multiple signs and symbols). The term multimodality describes a communication methodology that includes multiple textual, aural, and visual applications (modes) that are woven together to create what is referred to as an artifact. Multimodal teaching methodology attempts to create a deeper meaning to course content by activating the higher cognitive areas of the student's brain, creating a more sustained retention of the information (Murray, 2009). The introduction of multimodality educational methodologies as a means to more optimally engage students has been documented within educational literature. However, studies analyzing the distribution and penetration into basic sciences, more specifically anatomy and physiology, have not been forthcoming. This study used a quantitative survey design to determine the degree to which instructors integrated multimodality teaching practices into their course curricula. The instrument used for the study was designed by the researcher based on evidence found in the literature and sent to members of three associations/societies for anatomy and physiology instructors: the Human Anatomy and Physiology Society; the iTeach Anatomy & Physiology Collaborate; and the American Physiology Society. Respondents totaled 182 instructor members of two- and four-year, private and public higher learning colleges collected from the three organizations collectively with over 13,500 members in over 925 higher learning institutions nationwide. The study concluded that the expansion of multimodal methodologies into anatomy and physiology classrooms is at the beginning of the process and that there is ample opportunity for expansion. Instructors continue to use lecture as their primary means of interaction with students. Email is still the major form of out-of-class communication for full-time instructors. Instructors with

  12. Multimodale trafiknet i GIS (Multimodal Traffic Network in GIS)

    DEFF Research Database (Denmark)

    Kronbak, Jacob; Brems, Camilla Riff

    1996-01-01

    The report introduces the use of multi-modal traffic networks within a geographical Information System (GIS). The necessary theory of modelling multi-modal traffic network is reviewed and applied to the ARC/INFO GIS by an explorative example.

  13. The Relationships among Cognitive Correlates and Irregular Word, Non-Word, and Word Reading

    Science.gov (United States)

    Abu-Hamour, Bashir; University, Mu'tah; Urso, Annmarie; Mather, Nancy

    2012-01-01

    This study explored four hypotheses: (a) the relationships of rapid automatized naming (RAN) and processing speed (PS) to irregular word, non-word, and word reading; (b) the predictive power of various RAN and PS measures; (c) the cognitive correlates that best predicted irregular word, non-word, and word reading; and (d) reading performance of…

  14. Effect of hearing loss on semantic access by auditory and audiovisual speech in children.

    Science.gov (United States)

    Jerger, Susan; Tye-Murray, Nancy; Damian, Markus F; Abdi, Hervé

    2013-01-01

    This research studied whether the mode of input (auditory versus audiovisual) influenced semantic access by speech in children with sensorineural hearing impairment (HI). Participants, 31 children with HI and 62 children with normal hearing (NH), were tested with the authors' new multimodal picture word task. Children were instructed to name pictures displayed on a monitor and ignore auditory or audiovisual speech distractors. The semantic content of the distractors was varied to be related versus unrelated to the pictures (e.g., picture distractor of dog-bear versus dog-cheese, respectively). In children with NH, picture-naming times were slower in the presence of semantically related distractors. This slowing, called semantic interference, is attributed to the meaning-related picture-distractor entries competing for selection and control of the response (the lexical selection by competition hypothesis). Recently, a modification of the lexical selection by competition hypothesis, called the competition threshold (CT) hypothesis, proposed that (1) the competition between the picture-distractor entries is determined by a threshold, and (2) distractors with experimentally reduced fidelity cannot reach the CT. Thus, semantically related distractors with reduced fidelity do not produce the normal interference effect, but instead no effect or semantic facilitation (faster picture naming times for semantically related versus unrelated distractors). Facilitation occurs because the activation level of the semantically related distractor with reduced fidelity (1) is not sufficient to exceed the CT and produce interference but (2) is sufficient to activate its concept, which then strengthens the activation of the picture and facilitates naming. This research investigated whether the proposals of the CT hypothesis generalize to the auditory domain, to the natural degradation of speech due to HI, and to participants who are children. Our multimodal picture word task allowed us

  15. Training of Perceptual Motor Skills in Multimodal Virtual Environments

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Multimodal, immersive, virtual reality (VR) techniques open new perspectives for perceptual-motor skill trainers. They also introduce new risks and dangers. This paper describes the benefits and pitfalls of multimodal training and the cognitive building blocks of multimodal VR training simulators.

  16. Multimodal processes scheduling in mesh-like network environment

    Directory of Open Access Journals (Sweden)

    Bocewicz Grzegorz

    2015-06-01

    Multimodal process planning and scheduling play a pivotal role in many different domains, including city networks, multimodal transportation systems, and computer and telecommunication networks. A multimodal process can be seen as a process partially carried out by locally executed cyclic processes. In that context, the concept of a Mesh-like Multimodal Transportation Network (MMTN) is examined, in which several isomorphic subnetworks interact with each other via distinguished subsets of common shared intermodal transport interchange facilities (such as a railway station, bus station or bus/tram stop) so as to provide a variety of demand-responsive passenger transportation services. Consider a mesh-like layout of a passenger transport network equipped with different lines, including buses, trams, metro and trains, where passenger flows are treated as multimodal processes. The goal is to provide a declarative model enabling one to state a constraint satisfaction problem aimed at scheduling multimodal transportation processes encompassing passenger flow itineraries. The main objective is then to provide conditions guaranteeing the solvability of the scheduling of particular transport lines, i.e., guaranteeing the right match-up of locally acting cyclic bus, tram, metro and train schedules to given passenger flow itineraries.
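
    A miniature, self-contained illustration of the kind of constraint satisfaction problem described above: choose phase offsets for two cyclic lines sharing one interchange so that every arrival on line A is followed by a departure on line B within an acceptable transfer window. The cycle times, window and brute-force search are invented for illustration and are far simpler than the MMTN model.

        from itertools import product

        CYCLE_A, CYCLE_B = 12, 6            # minutes per cycle (tram line A, metro line B)
        TRANSFER_MIN, TRANSFER_MAX = 2, 5   # acceptable waiting time at the interchange
        HORIZON = 60                        # evaluate one hour of operation

        def feasible(offset_a, offset_b):
            arrivals_a = [offset_a + k * CYCLE_A for k in range(HORIZON // CYCLE_A)]
            departures_b = [offset_b + k * CYCLE_B for k in range(HORIZON // CYCLE_B + 1)]
            # Constraint: each A arrival must be matched by a B departure in the window.
            return all(
                any(TRANSFER_MIN <= d - t <= TRANSFER_MAX for d in departures_b)
                for t in arrivals_a
            )

        solutions = [(a, b) for a, b in product(range(CYCLE_A), range(CYCLE_B)) if feasible(a, b)]
        print(solutions[:5] if solutions else "no feasible offsets under these constraints")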

  17. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted individuals noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustively recollecting the sensory properties of “old” words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  18. Mediating multimodal environmental knowledge across animation techniques

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    2011-01-01

    The growing awareness of and concern about present environmental problems generates a proliferation of new forms of environmental discourses that are mediated in various ways. This chapter explores issues related to the ways in which environmental knowledge is multimodally communicated ... http://www.sustainlane.com/. The multimodal discourse analysis is meant to reveal how the selection and representation of environmental knowledge about social actors, social actions, resources, time and space are influenced by animation techniques. Furthermore, in the context of this multimodal discourse analysis, their influence upon ...

  19. Transcranial direct current stimulation (tDCS) modulation of picture naming and word reading: A meta-analysis of single session tDCS applied to healthy participants.

    Science.gov (United States)

    Westwood, Samuel J; Romani, Cristina

    2017-09-01

    Recent reviews quantifying the effects of single sessions of transcranial direct current stimulation (or tDCS) in healthy volunteers find only minor effects on cognition despite the popularity of this technique. Here, we wanted to quantify the effects of tDCS on language production tasks that measure word reading and picture naming. We reviewed 14 papers measuring tDCS effects across a total of 96 conditions to a) quantify effects of conventional stimulation on language regions (i.e., left hemisphere anodal tDCS administered to temporal/frontal areas) under normal conditions or under conditions of cognitive (semantic) interference; b) identify parameters which may moderate the size of the tDCS effect within conventional stimulation protocols (e.g., online vs offline, high vs. low current densities, and short vs. long durations), as well as within types of stimulation not typically explored by previous reviews (i.e., right hemisphere anodal tDCS or left/right hemisphere cathodal tDCS). In all analyses there was no significant effect of tDCS, but we did find a small but significant effect of time and duration of stimulation, with stronger effects for offline stimulation and for shorter durations. We discuss the implications of these findings for the mechanisms of tDCS and its poor efficacy in healthy participants. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Word Domain Disambiguation via Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.

    2006-06-04

    Word subject domains have been widely used to improve the performance of word sense disambiguation algorithms. However, comparatively little effort has been devoted so far to the disambiguation of word subject domains. The few existing approaches have focused on the development of algorithms specific to word domain disambiguation. In this paper we explore an alternative approach where word domain disambiguation is achieved via word sense disambiguation. Our study shows that this approach yields very strong results, suggesting that word domain disambiguation can be addressed in terms of word sense disambiguation with no need for special purpose algorithms.
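
    The gist of the approach (disambiguate the sense first, then read the domain off the chosen sense) can be sketched as follows. The Lesk implementation in NLTK and the tiny hand-made sense-to-domain table are stand-ins chosen for illustration; the paper itself does not prescribe them, and a real system would use a full resource such as WordNet Domains.

        # Requires NLTK with the WordNet data installed (nltk.download("wordnet")).
        from nltk.wsd import lesk

        SENSE_TO_DOMAIN = {                               # illustrative entries only
            "bank.n.01": "geography",                     # sloping land beside water
            "depository_financial_institution.n.01": "economy",
        }

        def word_domain(context_tokens, word):
            sense = lesk(context_tokens, word)            # word sense disambiguation step
            if sense is None:
                return "unknown"
            # Domain disambiguation falls out of the chosen sense.
            return SENSE_TO_DOMAIN.get(sense.name(), "factotum")

        print(word_domain("he deposited cash at the bank on monday".split(), "bank"))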

  1. Penggunaan Model Pembelajaran Picture and Picture Untuk Meningkatkan Kemampuan Siswa Menulis Karangan

    Directory of Open Access Journals (Sweden)

    Heriyanto Heriyanto

    2014-02-01

    Writing is a complex, productive and expressive language skill: writers must be skilled in using graphology and language structure and must have adequate knowledge of the language, so writing needs to be practised regularly and carefully from the early grades of primary school. An essay, as one product of writing, is the result of composing; in this study, essay writing refers to writing narrative essays. One problem in the teaching of Bahasa Indonesia is students' difficulty in writing correct, well-formed essays, which was also observed in class IVA of SDN Pinggir Papas 1. Teaching essay writing with the cooperative picture-and-picture model is expected to improve students' grasp of essay writing. To this end, a study was conducted with the 33 students of class IVA at SDN Pinggir Papas I, Kalianget District, Sumenep Regency. The study, entitled "Penggunaan Model Pembelajaran Kooperatif Tipe Picture and Picture untuk Meningkatkan Kemampuan Siswa Menulis Karangan" (Use of the Cooperative Picture and Picture Learning Model to Improve Students' Essay Writing Ability), was carried out as classroom action research over two cycles, each consisting of planning, action, observation and reflection. The data collected comprised worksheet (LKS) scores and individual essay-writing quizzes, observation sheets of teacher and student activity, assessments of the use of the cooperative picture-and-picture model, and student responses. The analysis showed that the cooperative picture-and-picture model improved the essay-writing ability of the class IVA students of SDN Pinggir Papas 1: the mean worksheet score rose from 55 to 71.6, and the mean individual essay score rose from 56.7 (55% mastery) to 74.5 (88% mastery), an improvement of 33% over cycle I.

  2. Naming, word identification and reading comprehension: Why is there a correlation, and what can it be used for?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    There is a well-established correlation between students’ reading skills and how quickly they can name letters and pictures. Naming speed before formal instruction can even predict later reading skills. But the cause of the correlation is unclear. The talk will summarize a series of studies showing that 1) what is being named (letters or pictures) is important for the correlation with different reading subskills (word identification or reading comprehension), 2) that naming is particularly useful in the prediction of reading speed, and 3) that naming is important for early identification of reading

  3. Metaphor in pictures.

    Science.gov (United States)

    Kennedy, J M

    1982-01-01

    Pictures can be literal or metaphoric. Metaphoric pictures involve intended violations of standard modes of depiction that are universally recognizable. The types of metaphoric pictures correspond to major groups of verbal metaphors, with the addition of a class of pictorial runes. Often the correspondence between verbal and pictorial metaphors depends on individual features of objects and such physical parameters as change of scale. A more sophisticated analysis is required for some pictorial metaphors, involving juxtapositions of well-known objects and indirect reference.

  4. Viewing pictures of a romantic partner reduces experimental pain: involvement of neural reward systems.

    Science.gov (United States)

    Younger, Jarred; Aron, Arthur; Parke, Sara; Chatterjee, Neil; Mackey, Sean

    2010-10-13

    The early stages of a new romantic relationship are characterized by intense feelings of euphoria, well-being, and preoccupation with the romantic partner. Neuroimaging research has linked those feelings to activation of reward systems in the human brain. The results of those studies may be relevant to pain management in humans, as basic animal research has shown that pharmacologic activation of reward systems can substantially reduce pain. Indeed, viewing pictures of a romantic partner was recently demonstrated to reduce experimental thermal pain. We hypothesized that pain relief evoked by viewing pictures of a romantic partner would be associated with neural activations in reward-processing centers. In this functional magnetic resonance imaging (fMRI) study, we examined fifteen individuals in the first nine months of a new, romantic relationship. Participants completed three tasks under periods of moderate and high thermal pain: 1) viewing pictures of their romantic partner, 2) viewing pictures of an equally attractive and familiar acquaintance, and 3) a word-association distraction task previously demonstrated to reduce pain. The partner and distraction tasks both significantly reduced self-reported pain, although only the partner task was associated with activation of reward systems. Greater analgesia while viewing pictures of a romantic partner was associated with increased activity in several reward-processing regions, including the caudate head, nucleus accumbens, lateral orbitofrontal cortex, amygdala, and dorsolateral prefrontal cortex--regions not associated with distraction-induced analgesia. The results suggest that the activation of neural reward systems via non-pharmacologic means can reduce the experience of pain.

  5. Acoustic multimode interference and self-imaging phenomena realized in multimodal phononic crystal waveguides

    International Nuclear Information System (INIS)

    Zou, Qiushun; Yu, Tianbao; Liu, Jiangtao; Wang, Tongbiao; Liao, Qinghua; Liu, Nianhua

    2015-01-01

    We report an acoustic multimode interference effect and self-imaging phenomena in an acoustic multimode waveguide system which consists of M parallel phononic crystal waveguides (M-PnCWs). Results show that the self-imaging principle remains applicable for acoustic waveguides just as it does for optical multimode waveguides. To obtain the dispersion relations and the replicas of the input acoustic waves produced along the propagation direction, we applied the finite element method to M-PnCWs, which support M guided modes within the target frequency range. The simulation results show that single images (including direct and mirrored images) and N-fold images (N is an integer) are identified along the propagation direction, with asymmetric and symmetric incidence discussed separately. The simulated positions of the replicas agree well with the calculated values that are theoretically determined by the self-imaging conditions based on guided mode propagation analysis. Moreover, potential applications of this self-imaging effect for acoustic wavelength de-multiplexing and beam splitting in the acoustic field are also presented. (paper)
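
    For reference, in the optical multimode-interference literature the self-imaging conditions obtained from guided mode propagation analysis are usually written in terms of the beat length of the two lowest-order modes; the record does not spell out its acoustic counterpart, but the standard optical form reads:

        L_\pi = \frac{\pi}{\beta_0 - \beta_1}, \qquad z_{\mathrm{single}} = p\,(3L_\pi), \qquad z_{N\text{-fold}} = \frac{p}{N}\,(3L_\pi), \qquad p = 1, 2, 3, \ldots

    Here \beta_0 and \beta_1 are the propagation constants of the two lowest-order guided modes; the single images are direct for even p and mirrored for odd p.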

  6. Producing colour pictures from SCAN

    International Nuclear Information System (INIS)

    Robichaud, K.

    1982-01-01

    The computer code SCAN.TSK has been written for use on the Interdata 7/32 minicomputer which will convert the pictures produced by the SCAN program into colour pictures on a colour graphics VDU. These colour pictures are a more powerful aid to detecting errors in the MONK input data than the normal lineprinter pictures. This report is intended as a user manual for using the program on the Interdata 7/32, and describes the method used to produce the pictures and gives examples of JCL, input data and of the pictures that can be produced. (U.K.)

  7. Device for transmitting pictures and device for receiving said pictures

    NARCIS (Netherlands)

    1993-01-01

    Device for transmitting television pictures in the form of transform coefficients and motion vectors. The motion vectors of a sub-picture are converted (20) into a series of difference vectors and a reference vector. Said series is subsequently applied to a variable-length encoder (22) which encodes
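
    The conversion described in the record (motion vectors of a sub-picture turned into a reference vector plus difference vectors before variable-length encoding) can be sketched as below; the choice of the first vector as the reference and the toy values are assumptions, and the actual variable-length code tables are not shown.

        def to_reference_and_differences(vectors):
            # vectors: list of (vx, vy) motion vectors of one sub-picture
            ref = vectors[0]
            diffs = [(vx - ref[0], vy - ref[1]) for vx, vy in vectors[1:]]
            return ref, diffs

        def from_reference_and_differences(ref, diffs):
            return [ref] + [(ref[0] + dx, ref[1] + dy) for dx, dy in diffs]

        motion_vectors = [(3, -1), (4, -1), (2, 0), (3, -2)]     # toy sub-picture
        ref, diffs = to_reference_and_differences(motion_vectors)
        assert from_reference_and_differences(ref, diffs) == motion_vectors
        print(ref, diffs)   # small differences are cheaper to variable-length encode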

  8. Forehearing words: Pre-activation of word endings at word onset.

    Science.gov (United States)

    Roll, Mikael; Söderström, Pelle; Frid, Johan; Mannfolk, Peter; Horne, Merle

    2017-09-29

    With speech occurring at rates of up to 6-7 syllables per second, speech perception and understanding involve rapid identification of speech sounds and pre-activation of morphemes and words. Using event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), we investigated the time-course and neural sources of pre-activation of word endings as participants heard the beginning of unfolding words. ERPs showed a pre-activation negativity (PrAN) for word beginnings (first two segmental phonemes) with few possible completions. PrAN increased gradually as the number of possible completions of word onsets decreased and the lexical frequency of the completions increased. The early brain potential effect for few possible word completions was associated with a blood-oxygen-level-dependent (BOLD) contrast increase in Broca's area (pars opercularis of the left inferior frontal gyrus) and angular gyrus of the left parietal lobe. We suggest early involvement of the left prefrontal cortex in inhibiting irrelevant left parietal activation during lexical selection. The results further our understanding of the importance of Broca's area in rapid online pre-activation of words. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  9. Spatiotemporal dynamics of word retrieval in speech production revealed by cortical high-frequency band activity.

    Science.gov (United States)

    Riès, Stephanie K; Dhillon, Rummit K; Clarke, Alex; King-Stephens, David; Laxer, Kenneth D; Weber, Peter B; Kuperman, Rachel A; Auguste, Kurtis I; Brunner, Peter; Schalk, Gerwin; Lin, Jack J; Parvizi, Josef; Crone, Nathan E; Dronkers, Nina F; Knight, Robert T

    2017-06-06

    Word retrieval is core to language production and relies on complementary processes: the rapid activation of lexical and conceptual representations and word selection, which chooses the correct word among semantically related competitors. Lexical and conceptual activation is measured by semantic priming. In contrast, word selection is indexed by semantic interference and is hampered in semantically homogeneous (HOM) contexts. We examined the spatiotemporal dynamics of these complementary processes in a picture naming task with blocks of semantically heterogeneous (HET) or HOM stimuli. We used electrocorticography data obtained from frontal and temporal cortices, permitting detailed spatiotemporal analysis of word retrieval processes. A semantic interference effect was observed with naming latencies longer in HOM versus HET blocks. Cortical response strength as indexed by high-frequency band (HFB) activity (70-150 Hz) amplitude revealed effects linked to lexical-semantic activation and word selection observed in widespread regions of the cortical mantle. Depending on the subsecond timing and cortical region, HFB indexed semantic interference (i.e., more activity in HOM than HET blocks) or semantic priming effects (i.e., more activity in HET than HOM blocks). These effects overlapped in time and space in the left posterior inferior temporal gyrus and the left prefrontal cortex. The data do not support a modular view of word retrieval in speech production but rather support substantial overlap of lexical-semantic activation and word selection mechanisms in the brain.

  10. Periodic words connected with the Fibonacci words

    Directory of Open Access Journals (Sweden)

    G. M. Barabash

    2016-06-01

    In this paper we introduce two families of periodic words (FLP-words of type 1 and FLP-words of type 2) that are connected with the Fibonacci words, and we investigate their properties.
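
    The FLP-word families themselves are defined in the paper and are not reproduced in this record; for reference, the Fibonacci words they are connected with are commonly generated by the recurrence shown below (one standard convention among several).

        # S0 = "0", S1 = "01", Sn = S(n-1) + S(n-2); the lengths are Fibonacci numbers.
        def fibonacci_words(n):
            words = ["0", "01"]
            while len(words) < n:
                words.append(words[-1] + words[-2])
            return words[:n]

        for i, w in enumerate(fibonacci_words(6)):
            print(f"S{i} (length {len(w)}): {w}")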

  11. An Analysis of Differential Response Patterns on the Peabody Picture Vocabulary Test-IIIB in Struggling Adult Readers and Third-Grade Children

    Science.gov (United States)

    Pae, Hye K.; Greenberg, Daphne; Williams, Rihana S.

    2012-01-01

    This study examines the Peabody Picture Vocabulary Test-IIIB (PPVT-IIIB) performance of 130 adults identified as struggling readers, in comparison to 175 third-grade children. Response patterns to the items on the PPVT-IIIB by these two groups were investigated, focusing on items, semantic categories, and lexical features, including word length,…

  12. Polarization Characterization of a Multi-Moded Feed Structure

    Data.gov (United States)

    National Aeronautics and Space Administration — The Polarization Characterization of a Multi-Moded Feed Structure projects characterize the polarization response of a multi-moded feed horn as an innovative...

  13. Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

    Science.gov (United States)

    Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad

    2017-10-01

    Augmented Reality (AR) has recently emerged as a technology used in many mobile applications. Mobile AR has been defined as a medium for displaying information merged with the real-world environment, mapped with augmented reality surroundings, in a single view. There are four main types of mobile augmented reality interfaces, and one of them is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal interfaces in mobile augmented reality. The main goal of this study is to propose a conceptual framework that illustrates the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces and augmented reality, analyzed the components of these frameworks, and assessed which can be applied on mobile devices. Our framework can be used as a guide for designers and developers building mobile AR applications with adaptive multimodal interfaces.
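
    A minimal sketch of the core behaviour attributed to a multimodal interface above, i.e., coordinating two input modes, is given below: a recognized speech command is paired with the touch event closest in time within a fusion window. The event format, window size and modes are illustrative assumptions, not part of the proposed framework.

        from dataclasses import dataclass

        @dataclass
        class InputEvent:
            mode: str       # "speech" or "touch"
            t: float        # timestamp in seconds
            payload: str    # recognized command or id of the touched AR object

        FUSION_WINDOW = 1.5  # seconds within which two modes count as one intent

        def fuse(events):
            speech = [e for e in events if e.mode == "speech"]
            touch = [e for e in events if e.mode == "touch"]
            fused = []
            for s in speech:
                candidates = [t for t in touch if abs(t.t - s.t) <= FUSION_WINDOW]
                if candidates:
                    target = min(candidates, key=lambda t: abs(t.t - s.t))
                    fused.append((s.payload, target.payload))
            return fused

        log = [InputEvent("touch", 0.2, "chair_03"), InputEvent("speech", 0.9, "rotate"),
               InputEvent("speech", 4.0, "delete"), InputEvent("touch", 6.1, "lamp_01")]
        print(fuse(log))   # -> [('rotate', 'chair_03')]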

  14. Filter. Remix. Make.: Cultivating Adaptability through Multimodality

    Science.gov (United States)

    Dusenberry, Lisa; Hutter, Liz; Robinson, Joy

    2015-01-01

    This article establishes traits of adaptable communicators in the 21st century, explains why adaptability should be a goal of technical communication educators, and shows how multimodal pedagogy supports adaptability. Three examples of scalable, multimodal assignments (infographics, research interviews, and software demonstrations) that evidence…

  15. Reading component skills in dyslexia: word recognition, comprehension and processing speed.

    Science.gov (United States)

    de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C

    2014-01-01

    The cognitive model of reading comprehension (RC) posits that RC is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills could be integrated into this model, like processing speed, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness for word recognition. The present study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. 40 children and adolescents (8-13 years) were divided into a Dyslexic Group (DG; 18 children, MA = 10.78, SD = 1.66) and a control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral and RC, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences regarding accuracy in oral and RC, phonological awareness, naming, and vocabulary scores. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on the RC test. The data support the importance of delimiting the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  16. The power of pictures: Vertical picture angles in power pictures

    NARCIS (Netherlands)

    S.R. Giessner (Steffen); M.K. Ryan (Michelle); T.W. Schubert (Thomas); N. van Quaquebeke (Niels)

    2011-01-01

    Conventional wisdom suggests that variations in vertical picture angle cause the subject to appear more powerful when depicted from below and less powerful when depicted from above. However, do the media actually use such associations to represent individual differences in

  17. Lesson 6. Picture unsharpness

    International Nuclear Information System (INIS)

    Chikirdin, Eh.G.

    1999-01-01

    A lecture on picture sharpness in biomedical radiography is presented. The notion of picture sharpness is specified, with visual acuity treated as the analyser of picture sharpness, and attention is paid to the POX-curve as a statistical method for assessing visual acuity. The sensitivity of X-ray image visualization systems is considered together with their specificity and accuracy. Among the sharpness parameters of a visualization system, resolution, resolving power and picture unsharpness are discussed. It is shown that the gradation and sharpness characteristics of the image are closely correlated, so that in practice attention must be paid to the factors determining them [ru]

  18. Rapid word-learning in normal-hearing and hearing-impaired children: effects of age, receptive vocabulary, and high-frequency amplification.

    Science.gov (United States)

    Pittman, A L; Lewis, D E; Hoover, B M; Stelmachowicz, P G

    2005-12-01

    This study examined rapid word-learning in 5- to 14-year-old children with normal and impaired hearing. The effects of age and receptive vocabulary were examined as well as those of high-frequency amplification. Novel words were low-pass filtered at 4 kHz (typical of current amplification devices) and at 9 kHz. It was hypothesized that (1) the children with normal hearing would learn more words than the children with hearing loss, (2) word-learning would increase with age and receptive vocabulary for both groups, and (3) both groups would benefit from a broader frequency bandwidth. Sixty children with normal hearing and 37 children with moderate sensorineural hearing losses participated in this study. Each child viewed a 4-minute animated slideshow containing 8 nonsense words created using the 24 English consonant phonemes (3 consonants per word). Each word was repeated 3 times. Half of the 8 words were low-pass filtered at 4 kHz and half were filtered at 9 kHz. After viewing the story twice, each child was asked to identify the words from among pictures in the slide show. Before testing, a measure of current receptive vocabulary was obtained using the Peabody Picture Vocabulary Test (PPVT-III). The PPVT-III scores of the hearing-impaired children were consistently poorer than those of the normal-hearing children across the age range tested. A similar pattern of results was observed for word-learning in that the performance of the hearing-impaired children was significantly poorer than that of the normal-hearing children. Further analysis of the PPVT and word-learning scores suggested that although word-learning was reduced in the hearing-impaired children, their performance was consistent with their receptive vocabularies. Additionally, no correlation was found between overall performance and the age of identification, age of amplification, or years of amplification in the children with hearing loss. Results also revealed a small increase in performance for both
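
    As an illustration of the bandwidth manipulation described above, the following sketch low-pass filters a stimulus waveform at 4 kHz and at 9 kHz. The Butterworth design, filter order, and sampling rate are illustrative assumptions, not the authors' actual signal chain.

      # Low-pass filter a stimulus recording at 4 kHz vs. 9 kHz (zero-phase Butterworth).
      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def lowpass(signal, fs, cutoff_hz, order=8):
          """Zero-phase Butterworth low-pass filter."""
          sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
          return sosfiltfilt(sos, signal)

      fs = 44100.0                              # assumed sampling rate of the recording
      word = np.random.randn(int(fs))           # placeholder for a 1 s recorded nonsense word

      narrowband = lowpass(word, fs, 4000.0)    # ~4 kHz, typical of current amplification devices
      wideband = lowpass(word, fs, 9000.0)      # ~9 kHz, extended-bandwidth condition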

  19. Putting words on wine: OENOLEX Burgundy, new directions in wine lexicography

    DEFF Research Database (Denmark)

    Leroyer, Patrick

    2013-01-01

    OENOLEX Burgundy: New Directions in Specialised Lexicography The (meta)lexicography of wine encompasses the study and compiling of entries on the language of wine in general language dictionaries, or on the knowledge of wine in specialised dictionaries and encyclopedias. Also, although more rarely......, it encompasses the study and compiling of single-field dictionaries of the language and/or knowledge of wine. However, this is but a fraction of the lexicographic picture. The lexicography of wine also includes a broad range of lexicographically structured information tools on paper and online, such as wine...... Burgundy is an ongoing interdisciplinary, international research project between specialised (meta)lexicographers, linguists, and wine experts. The project is co-financed by the Burgundy Wine Board and by the French region Burgundy. It is aimed at the development of new functions and multimodal usage modes...

  20. The Power of Pictures: Vertical Picture Angles in Power Pictures

    NARCIS (Netherlands)

    Giessner, Steffen R.; Ryan, Michelle K.; Schubert, Thomas W.; van Quaquebeke, Niels

    2011-01-01

    Conventional wisdom suggests that variations in vertical picture angle cause the subject to appear more powerful when depicted from below and less powerful when depicted from above. However, do the media actually use such associations to represent individual differences in power? We argue that the

  1. Digital Picture Production and Picture Aesthetic Competency in IT-Didactic Design

    DEFF Research Database (Denmark)

    Rasmussen, Helle

    , that IT and media are only seldom used by 21% of teachers in Visual Arts, and 7% of teachers in this subject never use IT and media in these lessons. Art teachers, among others, also express the need for continuing education (Ministeriet for Børn og Undervisning 2011). Since lessons in digital picture...... production have been a requirement in Visual Arts in Danish schools for more than two decades, these conditions call for the development of new didactic knowledge. Besides, new genres and ways of using digital pictures and media continuously develop (Sørensen 2002). This ought to be an incessant challenge...... subject Visual Arts, and across subjects in school. The overall research question has been: How can IT-didactic designs support lessons in the production of complex meaning in digital pictures and further the development of pupils' picture-aesthetic competences? By using the expression ‘complex...

  2. Multimodal Pedagogies for Teacher Education in TESOL

    Science.gov (United States)

    Yi, Youngjoo; Angay-Crowder, Tuba

    2016-01-01

    As a growing number of English language learners (ELLs) engage in digital and multimodal literacy practices in their daily lives, teachers are starting to incorporate multimodal approaches into their instruction. However, anecdotal and empirical evidence shows that teachers often feel unprepared for integrating such practices into their curricula…

  3. Pictures in Training

    Science.gov (United States)

    Miller, Elmo E.

    1973-01-01

    Pictures definitely seem to help training, but a study for the military finds these pictures need not be in moving form, such as films or videotape. Just how the pictorial techniques should be employed and with how much success depends on individual trainee and program differences. (KP)

  4. Comparison of the neural substrates mediating the semantic processing of Korean and English words using positron emission tomography

    International Nuclear Information System (INIS)

    Kim, Jea Jin; Kim, Myung Sun; Cho, Sang Soo; Kwon, Jun Soo; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Lee, Myung Chul

    2001-01-01

    This study was performed to identify the brain regions relatively specific to the semantic processing of Korean words and of English words, as well as the regions common to both. Regional cerebral blood flow associated with different semantic tasks was examined using [¹⁵O]H₂O positron emission tomography in 13 healthy volunteers. The tasks consisted of semantic tasks for Korean words, semantic tasks for English words, and control tasks using simple pictures. The regions specific and common to each language were identified by the relevant subtraction analyses using statistical parametric mapping. Common to the semantic processing of both types of words, activation was observed in the fusiform gyrus, particularly on the left side. In addition, activation of the left inferior temporal gyrus was found only in the semantic processing of English words. The regions specific to Korean words were observed in multiple areas, including the right primary auditory cortex, whereas the regions specific to English words were limited to the right posterior visual area. Internal phonological processing is engaged when performing the visual semantic task for Korean words, for which proficiency is high, whereas visual scanning plays an important role in performing the task for English words, for which proficiency is low.
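
    The subtraction logic behind the statistical parametric mapping analysis mentioned above can be sketched as a voxel-wise paired comparison of task versus control images across subjects. This is not SPM itself (no normalisation, smoothing, or multiple-comparison correction), and the array shapes and threshold are assumptions.

      # Voxel-wise paired t-test of task vs. control rCBF images (synthetic data).
      import numpy as np
      from scipy.stats import ttest_rel

      rng = np.random.default_rng(5)
      n_subjects, n_voxels = 13, 50000                     # 13 volunteers, flattened voxel grid
      task_rcbf = rng.random((n_subjects, n_voxels))       # e.g. Korean-word semantic task
      control_rcbf = rng.random((n_subjects, n_voxels))    # picture control task

      t_map, p_map = ttest_rel(task_rcbf, control_rcbf, axis=0)
      activated = (t_map > 0) & (p_map < 0.001)            # uncorrected threshold, illustration only
      print(f"{activated.sum()} voxels exceed the illustrative threshold")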

  5. Implications of Multimodal Learning Models for foreign language teaching and learning

    Directory of Open Access Journals (Sweden)

    Miguel Farías

    2011-04-01

    This literature review article approaches the topic of information and communications technologies from the perspective of their impact on the language learning process, with particular emphasis on the most appropriate designs of multimodal texts as informed by models of multimodal learning. The first part contextualizes multimodality within the fields of discourse studies, the psychology of learning and CALL; the second deals with multimodal conceptions of reading and writing by discussing hypertextuality and literacy. A final section outlines the possible implications of multimodal learning models for foreign language teaching and learning.

  6. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    Directory of Open Access Journals (Sweden)

    Gayathri Rajagopal

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data, which contain richer information than that available at the decision or matching-score level; hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal system was compared with other multimodal and monomodal approaches. In these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
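
    The pipeline described above (feature-level fusion of palmprint and iris features, PCA for dimensionality reduction, then nearest-neighbour classification) can be sketched as follows. The feature dimensions, number of components, and k are placeholder assumptions; the real system would use features extracted from the PolyU and UPOL images.

      # Sketch of fuse-then-classify: concatenate palmprint and iris feature
      # vectors, reduce with PCA, classify with k-nearest neighbours (KNN).
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      n_subjects, n_samples = 50, 4                            # assumed database size
      palm = rng.normal(size=(n_subjects * n_samples, 512))    # palmprint features (assumed dim)
      iris = rng.normal(size=(n_subjects * n_samples, 256))    # iris features (assumed dim)
      labels = np.repeat(np.arange(n_subjects), n_samples)

      fused = np.hstack([palm, iris])                          # feature-level fusion
      model = make_pipeline(PCA(n_components=64),              # curb the curse of dimensionality
                            KNeighborsClassifier(n_neighbors=3))
      model.fit(fused, labels)
      print("training accuracy:", model.score(fused, labels))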

  7. Mariner Mars 1971 television picture catalog. Volume 2: Sequence design and picture coverage

    Science.gov (United States)

    Koskela, P. E.; Helton, M. R.; Seeley, L. N.; Zawacki, S. J.

    1972-01-01

    A collection of data relating to the Mariner 9 TV pictures is presented. The data are arranged to offer speedy identification of what took place during entire science cycles, on individual revolutions, and during individual science links or sequences. Summary tables present the nominal design for each of the major picture-taking cycles, along with the sequences actually taken on each revolution. These tables permit identification, at a glance, of all TV sequences and the corresponding individual pictures for the first 262 revolutions (primary mission). A list of TV pictures, categorized according to their latitude and longitude, is also provided. Orthographic and/or Mercator plots for all pictures, along with pertinent numerical data for their center points, are presented. Other tables and plots of interest are also included. This document is based upon data contained in the Supplementary Experiment Data Record (SEDR) files as of 21 August 1972.

  8. Percorsi linguistici e semiotici: Critical Multimodal Analysis of Digital Discourse

    Directory of Open Access Journals (Sweden)

    edited by Ilaria Moschini

    2014-12-01

    The language section of LEA - edited by Ilaria Moschini - is dedicated to the Critical Multimodal Analysis of Digital Discourse, an approach that encompasses the detailed linguistic and semiotic investigation of texts within a socio-cultural perspective. It features an interview with Professor Theo van Leeuwen by Ilaria Moschini and four essays: “Retwitting, reposting, repinning; reshaping identities online: Towards a social semiotic multimodal analysis of digital remediation” by Elisabetta Adami; “Multimodal aspects of corporate social responsibility communication” by Carmen Daniela Maier; “Pervasive Technologies and the Paradoxes of Multimodal Digital Communication” by Sandra Petroni; and “Can the powerless speak? Linguistic and multimodal corporate media manipulation in digital environments: the case of Malala Yousafzai” by Maria Grazia Sindoni.

  9. Online learning from input versus offline memory evolution in adult word learning: effects of neighborhood density and phonologically related practice.

    Science.gov (United States)

    Storkel, Holly L; Bontempo, Daniel E; Pak, Natalie S

    2014-10-01

    In this study, the authors investigated adult word learning to determine how neighborhood density and practice across phonologically related training sets influence online learning from input during training versus offline memory evolution during no-training gaps. Sixty-one adults were randomly assigned to learn low- or high-density nonwords. Within each density condition, participants were trained on one set of words and then were trained on a second set of words, consisting of phonological neighbors of the first set. Learning was measured in a picture-naming test. Data were analyzed using multilevel modeling and spline regression. Steep learning during input was observed, with new words from dense neighborhoods and new words that were neighbors of recently learned words (i.e., second-set words) being learned better than other words. In terms of memory evolution, large and significant forgetting was observed during 1-week gaps in training. Effects of density and practice during memory evolution were opposite of those during input. Specifically, forgetting was greater for high-density and second-set words than for low-density and first-set words. High phonological similarity, regardless of source (i.e., known words or recent training), appears to facilitate online learning from input but seems to impede offline memory evolution.
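
    A minimal sketch of the modelling approach described above (a multilevel model with piecewise, spline-like slopes that separate learning during training from forgetting during no-training gaps) is given below. The data frame, column names, and effect sizes are invented for illustration.

      # Random-intercept model with separate slopes for cumulative training
      # time (online learning) and cumulative gap time (offline forgetting).
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_subj, n_obs = 20, 12
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_obs),
          "train_time": np.tile(np.arange(n_obs), n_subj),             # training sessions so far
          "gap_time": np.tile(np.repeat([0, 1], n_obs // 2), n_subj),  # weeks of no training so far
          "density": np.repeat(rng.integers(0, 2, n_subj), n_obs),     # 0 = sparse, 1 = dense neighbourhood
      })
      df["naming_acc"] = (0.05 * df.train_time - 0.10 * df.gap_time
                          + 0.02 * df.density + rng.normal(0, 0.05, len(df)))

      model = smf.mixedlm("naming_acc ~ train_time + gap_time + density",
                          data=df, groups=df["subject"]).fit()
      print(model.summary())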

  10. Using lexical variables to predict picture-naming errors in jargon aphasia

    Directory of Open Access Journals (Sweden)

    Catherine Godbold

    2015-04-01

    Introduction: Individuals with jargon aphasia produce fluent output which often comprises high proportions of non-word errors (e.g., maf for dog). Research has been devoted to identifying the underlying mechanisms behind such output. Some accounts posit a reduced flow of spreading activation between levels in the lexical network (e.g., Robson et al., 2003). If activation-level differences across the lexical network are a cause of non-word outputs, we would predict improved performance when target items reflect an increased flow of activation between levels (e.g., more frequently used words are often represented by higher resting levels of activation). This research investigates the effect of lexical properties of targets (e.g., frequency, imageability) on accuracy, error type (real word vs. non-word), and target-error overlap of non-word errors in a picture naming task by individuals with jargon aphasia. Method: Participants were 17 individuals with Wernicke's aphasia who produced a high proportion of non-word errors (>20% of errors) on the Philadelphia Naming Test (PNT; Roach et al., 1996). The data were retrieved from the Moss Aphasic Psycholinguistic Database Project (MAPPD; Mirman et al., 2010). We used a series of mixed models to test whether lexical variables predicted accuracy, error type (real word vs. non-word), and target-error overlap for the PNT data. As lexical variables tend to be highly correlated, we performed a principal components analysis to reduce the variables to five components representing phonology (length, phonotactic probability, neighbourhood density, and neighbourhood frequency), semantics (imageability and concreteness), usage (frequency and age of acquisition), name agreement, and visual complexity. Results and Discussion: Table 1 shows the components that made a significant contribution to each model. Individuals with jargon aphasia produced more correct responses and fewer non-word errors relative to
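
    The preprocessing step described in this record (collapsing highly correlated lexical variables into principal components before modelling) might look like the sketch below; a plain logistic regression stands in for the study's mixed models, and all column names and values are invented.

      # Reduce correlated lexical predictors to components, then predict
      # picture-naming accuracy from the component scores.
      import numpy as np
      import pandas as pd
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      n_items = 175                                   # number of naming items (assumed)
      lexical = pd.DataFrame({
          "length": rng.integers(3, 10, n_items),
          "phonotactic_prob": rng.random(n_items),
          "neighbourhood_density": rng.integers(0, 30, n_items),
          "frequency": rng.random(n_items),
          "imageability": rng.random(n_items),
          "age_of_acquisition": rng.random(n_items),
      })
      correct = rng.integers(0, 2, n_items)           # 1 = picture named correctly

      components = PCA(n_components=3).fit_transform(
          StandardScaler().fit_transform(lexical))
      clf = LogisticRegression().fit(components, correct)
      print("component coefficients:", clf.coef_)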

  11. Early Parallel Activation of Semantics and Phonology in Picture Naming: Evidence from a Multiple Linear Regression MEG Study.

    Science.gov (United States)

    Miozzo, Michele; Pulvermüller, Friedemann; Hauk, Olaf

    2015-10-01

    The time course of brain activation during word production has become an area of increasingly intense investigation in cognitive neuroscience. The predominant view has been that semantic and phonological processes are activated sequentially, at about 150 and 200-400 ms after picture onset. Although evidence from prior studies has been interpreted as supporting this view, these studies were arguably not ideally suited to detect early brain activation of semantic and phonological processes. We here used a multiple linear regression approach to magnetoencephalography (MEG) analysis of picture naming in order to investigate early effects of variables specifically related to visual, semantic, and phonological processing. This was combined with distributed minimum-norm source estimation and region-of-interest analysis. Brain activation associated with visual image complexity appeared in occipital cortex at about 100 ms after picture presentation onset. At about 150 ms, semantic variables became physiologically manifest in left frontotemporal regions. In the same latency range, we found an effect of phonological variables in the left middle temporal gyrus. Our results demonstrate that multiple linear regression analysis is sensitive to early effects of multiple psycholinguistic variables in picture naming. Crucially, our results suggest that access to phonological information might begin in parallel with semantic processing around 150 ms after picture onset. © The Author 2014. Published by Oxford University Press.
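
    The regression logic described above can be sketched as follows: at each time point, the evoked amplitude across items is regressed on visual, semantic, and phonological item variables, yielding one beta time course per variable. No source estimation is attempted, and the shapes and predictors are assumptions.

      # Per-time-point multiple linear regression of MEG amplitudes on
      # psycholinguistic item variables (single sensor, synthetic data).
      import numpy as np

      rng = np.random.default_rng(3)
      n_items, n_times = 200, 300                   # items x time samples (assumed)
      meg = rng.normal(size=(n_items, n_times))     # evoked amplitude per item and time point

      predictors = np.column_stack([
          np.ones(n_items),                         # intercept
          rng.normal(size=n_items),                 # visual image complexity
          rng.normal(size=n_items),                 # semantic variable (e.g. concreteness)
          rng.normal(size=n_items),                 # phonological variable (e.g. word length)
      ])

      # Least-squares fit at every time point; betas has shape (4, n_times),
      # i.e. one regression time course per variable.
      betas, *_ = np.linalg.lstsq(predictors, meg, rcond=None)
      print("beta time-course shape:", betas.shape)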

  12. Fiber-Optic Vibration Sensor Based on Multimode Fiber

    Directory of Open Access Journals (Sweden)

    I. Lujo

    2008-06-01

    The purpose of this paper is to present a fiber-optic vibration sensor based on monitoring the mode distribution in a multimode optical fiber. Detection of vibrations and their parameters is possible through observation of the output speckle pattern from the multimode optical fiber. A working experimental model has been built in which all components used are widely available and cheap: a CCD camera (a simple web-cam), a multimode laser in the visible range as a light source, a length of multimode optical fiber, and a computer for signal processing. Measurements have shown good agreement with the actual frequency of vibrations, and promising results were achieved with the amplitude measurements, although they require some adaptation of the experimental model. The proposed sensor is cheap and lightweight and therefore presents an interesting alternative for monitoring large smart structures.
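
    The record does not spell out the signal-processing chain, so the sketch below shows one plausible approach: summarize each camera frame of the speckle pattern as its mean absolute change from the previous frame, then read the vibration frequency off the spectrum of that one-dimensional signal.

      # Estimate vibration frequency from a stack of speckle-pattern frames.
      import numpy as np

      def dominant_vibration_frequency(frames, fps):
          """frames: (n_frames, height, width) grayscale speckle images."""
          change = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
          change -= change.mean()                       # remove the DC component
          spectrum = np.abs(np.fft.rfft(change))
          freqs = np.fft.rfftfreq(change.size, d=1.0 / fps)
          return freqs[np.argmax(spectrum[1:]) + 1]     # skip the zero-frequency bin

      # Synthetic example: 30 fps web-cam, speckle intensity modulated at ~5 Hz
      fps, n = 30.0, 300
      t = np.arange(n) / fps
      frames = np.random.rand(n, 64, 64) * (1.0 + 0.5 * np.sin(2 * np.pi * 5.0 * t))[:, None, None]
      print(dominant_vibration_frequency(frames, fps))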

  13. Video genre classification using multimodal features

    Science.gov (United States)

    Jin, Sung Ho; Bae, Tae Meon; Choo, Jin Ho; Ro, Yong Man

    2003-12-01

    We propose a video genre classification method using multimodal features. The proposed method is applied in the preprocessing stage of automatic video summarization and in the retrieval and classification of broadcast video contents. Through a statistical analysis of low-level and mid-level audio-visual features in video, the proposed method achieves good performance in classifying several broadcasting genres such as cartoon, drama, music video, news, and sports. In this paper, we adopt MPEG-7 audio-visual descriptors as multimodal features of video contents and evaluate the classification performance by feeding the features into a decision-tree-based classifier trained with CART. The experimental results show that the proposed method can recognize several broadcasting video genres with high accuracy and that the classification performance with multimodal features is superior to that with unimodal features.
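
    A sketch of the classifier described above is shown below: multimodal (audio plus visual) descriptor vectors are fed to a CART-style decision tree. The descriptor dimensions and data are synthetic stand-ins for the MPEG-7 features.

      # Decision-tree genre classification on fused audio-visual features.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier   # scikit-learn trees use the CART algorithm

      genres = ["cartoon", "drama", "music_video", "news", "sports"]
      rng = np.random.default_rng(4)

      n_clips, n_audio, n_visual = 500, 20, 30             # assumed descriptor dimensions
      audio_feats = rng.normal(size=(n_clips, n_audio))    # stand-in for MPEG-7 audio descriptors
      visual_feats = rng.normal(size=(n_clips, n_visual))  # stand-in for MPEG-7 visual descriptors
      X = np.hstack([audio_feats, visual_feats])           # multimodal feature vector
      y = rng.integers(0, len(genres), n_clips)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=8).fit(X_train, y_train)
      print("test accuracy:", tree.score(X_test, y_test))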

  14. Do handwritten words magnify lexical effects in visual word recognition?

    Science.gov (United States)

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  15. Statistical Laws Governing Fluctuations in Word Use from Word Birth to Word Death

    Science.gov (United States)

    Petersen, Alexander M.; Tenenbaum, Joel; Havlin, Shlomo; Stanley, H. Eugene

    2012-03-01

    We analyze the dynamic properties of 10⁷ words recorded in English, Spanish and Hebrew over the period 1800-2008 in order to gain insight into the coevolution of language and culture. We report language-independent patterns useful as benchmarks for theoretical models of language evolution. A significantly decreasing (increasing) trend in the birth (death) rate of words indicates a recent shift in the selection laws governing word use. For new words, we observe a peak in the growth-rate fluctuations around 40 years after introduction, consistent with the typical entry time into standard dictionaries and the human generational timescale. Pronounced changes in the dynamics of language during periods of war show that word correlations, occurring across time and between words, are largely influenced by coevolutionary social, technological, and political factors. We quantify cultural memory by analyzing the long-term correlations in the use of individual words using detrended fluctuation analysis.
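
    A minimal version of the detrended fluctuation analysis (DFA) used to quantify long-term correlations in word use is sketched below; the input series is synthetic, standing in for a word's yearly relative frequency over 1800-2008.

      # Minimal DFA: the slope of log F(s) versus log s is the exponent alpha.
      import numpy as np

      def dfa(series, scales=(4, 8, 16, 32, 64)):
          """Return the DFA scaling exponent alpha of a 1-D series."""
          profile = np.cumsum(series - series.mean())      # integrated, mean-removed series
          fluctuations = []
          for s in scales:
              f2 = []
              for w in range(profile.size // s):
                  seg = profile[w * s:(w + 1) * s]
                  x = np.arange(s)
                  trend = np.polyval(np.polyfit(x, seg, 1), x)   # local linear detrending
                  f2.append(np.mean((seg - trend) ** 2))
              fluctuations.append(np.sqrt(np.mean(f2)))
          alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
          return alpha

      yearly_use = np.random.randn(209)         # stand-in for one word's 1800-2008 use series
      print("DFA exponent:", dfa(yearly_use))   # ~0.5 for uncorrelated noise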

  16. Concepts for space nuclear multi-mode reactors

    International Nuclear Information System (INIS)

    Myrabo, L.; Botts, T.E.; Powell, J.R.

    1983-01-01

    A number of nuclear multi-mode reactor power plants are conceptualized for use with solid core, fixed particle bed and rotating particle bed reactors. Multi-mode systems generate high peak electrical power in the open cycle mode, with MHD generator or turbogenerator converters and cryogenically stored coolants. Low level stationkeeping power and auxiliary reactor cooling (i.e., for the removal of reactor afterheat) are provided in a closed cycle mode. Depending on reactor design, heat transfer to the low power converters can be accomplished by heat pipes, liquid metal coolants or high pressure gas coolants. Candidate low power conversion cycles include Brayton turbogenerator, Rankine turbogenerator, thermoelectric and thermionic approaches. A methodology is suggested for estimating the system mass of multi-mode nuclear power plants as a function of peak electric power level and required mission run time. The masses of closed cycle nuclear and open cycle chemical power systems are briefly examined to identify the regime of superiority for nuclear multi-mode systems. Key research and technology issues for such power plants are also identified
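
    The abstract mentions, but does not give, a methodology for estimating system mass from peak electric power and mission run time. Purely as an illustration of that kind of parametric scaling, the sketch below assumes a fixed mass term, a converter mass proportional to peak power, and a stored-coolant mass proportional to burst energy; every coefficient is hypothetical.

      # Illustrative parametric mass model (all coefficients are hypothetical,
      # not taken from the paper): converter mass scales with peak power and
      # open-cycle coolant mass scales with burst energy (power x run time).
      def multimode_system_mass(peak_power_mwe, run_time_s,
                                kg_per_mwe=500.0,        # hypothetical converter scaling
                                coolant_kg_per_mj=0.4,   # hypothetical cryogen usage
                                fixed_mass_kg=2000.0):
          burst_energy_mj = peak_power_mwe * run_time_s  # MW x s = MJ
          return (fixed_mass_kg
                  + kg_per_mwe * peak_power_mwe
                  + coolant_kg_per_mj * burst_energy_mj)

      # Example: 100 MWe peak power for a 500 s burst
      print(multimode_system_mass(100.0, 500.0), "kg (illustrative only)")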

  17. Reading component skills in dyslexia: word recognition, comprehension and processing speed

    Directory of Open Access Journals (Sweden)

    Darlene Godoy Oliveira

    2014-11-01

    The cognitive model of reading comprehension posits that reading comprehension is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills, such as processing speed, could be integrated into this model, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness for word recognition. The following study evaluated the components of the reading comprehension model and predictive skills in children and adolescents with dyslexia. Forty children and adolescents (8-13 years) were divided into a Dyslexic Group (DG; 18 children, MA = 10.78, SD = 1.66) and a Control Group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary and phonological awareness were assessed. There were no group differences regarding accuracy in oral and reading comprehension, phonological awareness, naming, and vocabulary scores. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, individuals with dyslexia can achieve normal scores on reading comprehension tests. The data support the importance of delimiting the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  18. Get the picture? The effects of iconicity on toddlers' reenactment from picture books.

    Science.gov (United States)

    Simcock, Gabrielle; DeLoache, Judy

    2006-11-01

    What do toddlers learn from everyday picture-book reading interactions? To date, there has been scant research exploring this question. In this study, the authors adapted a standard imitation procedure to examine 18- to 30-month-olds' ability to learn how to reenact a novel action sequence from a picture book. The results provide evidence that toddlers can imitate specific target actions on novel real-world objects on the basis of a picture-book interaction. Children's imitative performance after the reading interaction varied both as a function of age and the level of iconicity of the pictures in the book. These findings are discussed in terms of children's emerging symbolic capacity and the flexibility of the cognitive representation.

  19. Multimode optical fibers: steady state mode exciter.

    Science.gov (United States)

    Ikeda, M; Sugimura, A; Ikegami, T

    1976-09-01

    The steady state mode power distribution of the multimode graded index fiber was measured. A simple and effective steady state mode exciter was fabricated by an etching technique. Its insertion loss was 0.5 dB for an injection laser. Deviation in transmission characteristics of multimode graded index fibers can be avoided by using the steady state mode exciter.

  20. Comprehension of concrete and abstract words in semantic dementia

    Science.gov (United States)

    Jefferies, Elizabeth; Patterson, Karalyn; Jones, Roy W.; Lambon Ralph, Matthew A.

    2009-01-01

    The vast majority of brain-injured patients with semantic impairment have better comprehension of concrete than abstract words. In contrast, several patients with semantic dementia (SD), who show circumscribed atrophy of the anterior temporal lobes bilaterally, have been reported to show reverse imageability effects, i.e., relative preservation of abstract knowledge. Although these reports largely concern individual patients, some researchers have recently proposed that superior comprehension of abstract concepts is a characteristic feature of SD. This would imply that the anterior temporal lobes are particularly crucial for processing sensory aspects of semantic knowledge, which are associated with concrete not abstract concepts. However, functional neuroimaging studies of healthy participants do not unequivocally predict reverse imageability effects in SD because the temporal poles sometimes show greater activation for more abstract concepts. We examined a case-series of eleven SD patients on a synonym judgement test that orthogonally varied the frequency and imageability of the items. All patients had higher success rates for more imageable as well as more frequent words, suggesting that (a) the anterior temporal lobes underpin semantic knowledge for both concrete and abstract concepts, (b) more imageable items – perhaps due to their richer multimodal representations – are typically more robust in the face of global semantic degradation and (c) reverse imageability effects are not a characteristic feature of SD. PMID:19586212