WorldWideScience

Sample records for salient visual cue

  1. Preschoolers Benefit from Visually Salient Speech Cues

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…

  2. Salient cues improve prospective remembering in Korsakoff's syndrome.

    Science.gov (United States)

    Altgassen, Mareike; Ariese, Laura; Wester, Arie J; Kessels, Roy P C

    2016-06-01

    Korsakoff's syndrome is characterized by deficits in episodic memory and executive functions. Both cognitive functions are needed to remember to execute delayed intentions (prospective memory, PM), an ability that is crucial for independent living in everyday life. So far, PM has only been targeted by one study in Korsakoff's syndrome. This study explored the effects of executive control demands on PM to shed further light on a possible interdependence of memory and executive functions in Korsakoff's syndrome. Twenty-five individuals with Korsakoff's syndrome and 23 chronic alcoholics (without amnesia) performed a categorization task into which a PM task was embedded that put either high or low demands on executive control processes (using low vs. high salient cues). Overall, Korsakoff patients had fewer PM hits than alcoholic controls. Across groups, participants had fewer PM hits when cues were low salient as compared to high salient. Korsakoff patients performed better on PM when highly salient cues were presented than cues of low salience, while there were no differential effects for alcoholic controls. While overall Korsakoff patients showed a global PM deficit, the extent of this deficit was moderated by the executive control demands of the task applied. This indicates further support for an interrelation of executive functions and memory performance in Korsakoff's syndrome. Positive clinical implications of the work: Prospective memory (PM) performance in Korsakoff's syndrome is related to executive control load. Increasing cues' salience improves PM performance in Korsakoff's syndrome. Salient visual aids may be used in everyday life to improve Korsakoff individuals' planning and organization skills. Cautions or limitations of the study: Results were obtained in a structured laboratory setting and need to be replicated in a more naturalistic setting to assess their transferability to everyday life. Given the relatively small sample size, individual predictors of PM

  3. Most people do not ignore salient invalid cues in memory-based decisions.

    Science.gov (United States)

    Platzer, Christine; Bröder, Arndt

    2012-08-01

    Former experimental studies have shown that decisions from memory tend to rely only on a few cues, following simple noncompensatory heuristics like "take the best." However, it has also repeatedly been demonstrated that a pictorial, as opposed to a verbal, representation of cue information fosters the inclusion of more cues in compensatory strategies, suggesting a facilitated retrieval of cue patterns. These studies did not properly control for visual salience of cues, however. In the experiment reported here, the cue salience hierarchy established in a pilot study was either congruent or incongruent with the validity order of the cues. Only the latter condition increased compensatory decision making, suggesting that the apparent representational format effect is, rather, a salience effect: Participants automatically retrieve and incorporate salient cues irrespective of their validity. Results are discussed with respect to reaction time data.

  4. Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors

    Science.gov (United States)

    Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.

    2014-01-01

    With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…

  5. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue

    Directory of Open Access Journals (Sweden)

    Ashley J Booth

    2015-06-01

    Full Text Available The ease of synchronising movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronising with discrete auditory rhythms than with an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g. a dot following an oscillatory trajectory). Similarly, when synchronising with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronise movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centred on a large projection screen. The target dot was surrounded by 2, 8 or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100 or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronise movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  6. The flexible engagement of monitoring processes in non-focal and focal prospective memory tasks with salient cues.

    Science.gov (United States)

    Hefer, Carmen; Cohen, Anna-Lisa; Jaudas, Alexander; Dreisbach, Gesine

    2017-09-01

    Prospective memory (PM) refers to the ability to remember to perform a delayed intention. Here, we aimed to investigate the ability to suspend such an intention and thus to confirm previous findings (Cohen, Gordon, Jaudas, Hefer, & Dreisbach, 2016) demonstrating the ability to flexibly engage in monitoring processes. In the current study, we presented a perceptually salient PM cue (bold and red) to rule out that previous findings were limited to non-salient and, thus, easy to ignore PM cues. Moreover, we used both a non-focal (Experiment 1) and a focal (Experiment 2) PM cue. In both experiments, three groups of participants performed an Eriksen flanker task as an ongoing task with an embedded PM task (they had to remember to press the F1 key if a pre-specified cue appeared). Participants were assigned to either a control condition (performed solely the flanker task), a standard PM condition (performed the flanker task along with the PM task), or a PM delayed condition (performed the flanker task but were instructed to postpone their PM task intention). The results of Experiment 1 with the non-focal PM cue closely replicated those of Cohen et al. (2016) and confirmed that participants were able to successfully postpone the PM cue intention without additional costs even when the PM cue was a perceptually salient one. However, when the PM cue was focal (Experiment 2), it was much more difficult for participants to ignore it, as evidenced by commission errors and slower latencies on PM cue trials. In sum, results showed that the focality of the PM cue plays a more crucial role in the flexibility of the monitoring process whereas the saliency of the PM cue does not. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Visual form Cues, Biological Motions, Auditory Cues, and Even Olfactory Cues Interact to Affect Visual Sex Discriminations

    OpenAIRE

    Rick Van Der Zwan; Anna Brooks; Duncan Blair; Coralia Machatch; Graeme Hacker

    2011-01-01

    Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perceptions (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations but that biological motion cues alone are not. Interestingly, biological motions can resolve ...

  8. Remote Sensing of Martian Terrain Hazards via Visually Salient Feature Detection

    Science.gov (United States)

    Al-Milli, S.; Shaukat, A.; Spiteri, C.; Gao, Y.

    2014-04-01

    The main objective of the FASTER remote sensing system is the detection of rocks on planetary surfaces by employing models that can efficiently characterise rocks in terms of semantic descriptions. The proposed technique abates some of the algorithmic limitations of existing methods with no training requirements, lower computational complexity and greater robustness towards visual tracking applications over long-distance planetary terrains. Visual saliency models inspired by biological systems help to identify important regions (such as rocks) in the visual scene. Surface rocks are therefore completely described in terms of their local or global conspicuity (pop-out) characteristics. These local and global pop-out cues include (but are not limited to) colour, depth, orientation, curvature, size, luminance intensity, shape and topology. The currently applied methods follow a purely bottom-up strategy of visual attention for selection of conspicuous regions in the visual scene, without any top-down control. Furthermore, the models chosen for testing and evaluation are relatively fast among the state-of-the-art and have very low computational load. Quantitative evaluation of these state-of-the-art models was carried out using benchmark datasets including the Surrey Space Centre Lab Testbed, Pangu-generated images, RAL Space SEEKER and CNES Mars Yard datasets. The analysis indicates that models based on visually salient information in the frequency domain (SRA, SDSR, PQFT) are the best performing ones for detecting rocks in an extra-terrestrial setting. In particular, the SRA model appears to be the best choice overall, especially as it requires the least computational time while keeping errors competitively low. The salient objects extracted using these models can then be merged with the Digital Elevation Models (DEMs) generated from the same navigation cameras and fused into the navigation map, giving a clear indication of rock locations.

  9. Attentional Capture by Salient Distractors during Visual Search Is Determined by Temporal Task Demands

    DEFF Research Database (Denmark)

    Kiss, Monika; Grubert, Anna; Petersen, Anders

    2012-01-01

    The question whether attentional capture by salient but task-irrelevant visual stimuli is triggered in a bottom–up fashion or depends on top–down task settings is still unresolved. Strong support for bottom–up capture was obtained in the additional singleton task, in which search arrays were visible… until response onset. Equally strong evidence for top–down control of attentional capture was obtained in spatial cueing experiments in which display durations were very brief. To demonstrate the critical role of temporal task demands on salience-driven attentional capture, we measured ERP indicators… component that was followed by a late Pd component, suggesting that they triggered attentional capture, which was later replaced by location-specific inhibition. When search arrays were visible for only 200 msec, the distractor-elicited N2pc was eliminated and was replaced by a Pd component in the same time…

  10. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  11. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    Science.gov (United States)

    Lie, Kin-Pou

    2015-01-01

    Spatial contextual cueing refers to visual search performance's being improved when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.

  12. Visual cues given by humans are not sufficient for Asian elephants (Elephas maximus) to find hidden food.

    Directory of Open Access Journals (Sweden)

    Joshua M Plotnik

    Full Text Available Recent research suggests that domesticated species--due to artificial selection by humans for specific, preferred behavioral traits--are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants' inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation.

  13. Visual cues given by humans are not sufficient for Asian elephants (Elephas maximus) to find hidden food.

    Science.gov (United States)

    Plotnik, Joshua M; Pokorny, Jennifer J; Keratimanochaya, Titiporn; Webb, Christine; Beronja, Hana F; Hennessy, Alice; Hill, James; Hill, Virginia J; Kiss, Rebecca; Maguire, Caitlin; Melville, Beckett L; Morrison, Violet M B; Seecoomar, Dannah; Singer, Benjamin; Ukehaxhaj, Jehona; Vlahakis, Sophia K; Ylli, Dora; Clayton, Nicola S; Roberts, John; Fure, Emilie L; Duchatelier, Alicia P; Getz, David

    2013-01-01

    Recent research suggests that domesticated species--due to artificial selection by humans for specific, preferred behavioral traits--are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants' inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation.

  14. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    Science.gov (United States)

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  15. Visual cue-specific craving is diminished in stressed smokers.

    Science.gov (United States)

    Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R

    2017-09-01

    Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who forwent smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and stress task, and after a recovery period following each task. As expected, the stress and smoking images generated greater craving than neutral or motivational control images. Once smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably notwithstanding whether they are exposed to smoking image cues.

  16. Visual cues for data mining

    Science.gov (United States)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.

  17. Enhancing L2 Vocabulary Acquisition through Implicit Reading Support Cues in E-books

    Science.gov (United States)

    Liu, Yeu-Ting; Leveridge, Aubrey Neil

    2017-01-01

    Various explicit reading support cues, such as gloss, QR codes and hypertext annotation, have been embedded in e-books designed specifically for fostering various aspects of language development. However, explicit visual cues are not always reliably perceived as salient or effective by language learners. The current study explored the efficacy of…

  18. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    Science.gov (United States)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  19. Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.

    Science.gov (United States)

    Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne

    2016-05-01

    We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to foreground stimuli over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.

  20. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    Science.gov (United States)

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Dissociable Fronto-Operculum-Insula Control Signals for Anticipation and Detection of Inhibitory Sensory Cue.

    Science.gov (United States)

    Cai, Weidong; Chen, Tianwen; Ide, Jaime S; Li, Chiang-Shan R; Menon, Vinod

    2017-08-01

    The ability to anticipate and detect behaviorally salient stimuli is important for virtually all adaptive behaviors, including inhibitory control that requires the withholding of prepotent responses when instructed by external cues. Although right fronto-operculum-insula (FOI), encompassing the anterior insular cortex (rAI) and inferior frontal cortex (rIFC), involvement in inhibitory control is well established, little is known about signaling mechanisms underlying their differential roles in detection and anticipation of salient inhibitory cues. Here we use 2 independent functional magnetic resonance imaging data sets to investigate dynamic causal interactions of the rAI and rIFC, with sensory cortex during detection and anticipation of inhibitory cues. Across 2 different experiments involving auditory and visual inhibitory cues, we demonstrate that primary sensory cortex has a stronger causal influence on rAI than on rIFC, suggesting a greater role for the rAI in detection of salient inhibitory cues. Crucially, a Bayesian prediction model of subjective trial-by-trial changes in inhibitory cue anticipation revealed that the strength of causal influences from rIFC to rAI increased significantly on trials in which participants had higher anticipation of inhibitory cues. Together, these results demonstrate the dissociable bottom-up and top-down roles of distinct FOI regions in detection and anticipation of behaviorally salient cues across multiple sensory modalities. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Appraisals of Salient Visual Elements in Web Page Design

    Directory of Open Access Journals (Sweden)

    Johanna M. Silvennoinen

    2016-01-01

    Full Text Available Visual elements in user interfaces elicit emotions in users and are, therefore, essential to users interacting with different software. Although there is research on the relationship between emotional experience and visual user interface design, the focus has been on the overall visual impression and not on visual elements. Additionally, often in a software development process, programming and general usability guidelines are considered as the most important parts of the process. Therefore, knowledge of programmers' appraisals of visual elements can be utilized to understand the web page designs we interact with. In this study, appraisal theory of emotion is utilized to elaborate the relationship of emotional experience and visual elements from programmers' perspective. Participants (N=50) used 3E-templates to express their visual and emotional experiences of web page designs. Content analysis of textual data illustrates how emotional experiences are elicited by salient visual elements. Eight hierarchical visual element categories were found and connected to various emotions, such as frustration, boredom, and calmness, via relational emotion themes. The emotional emphasis was on centered, symmetrical, and balanced composition, which was experienced as pleasant and calming. The results benefit user-centered visual interface design and researchers of visual aesthetics in human-computer interaction.

  3. Overshadowing of geometric cues by a beacon in a spatial navigation task.

    Science.gov (United States)

    Redhead, Edward S; Hamilton, Derek A; Parker, Matthew O; Chan, Wai; Allison, Craig

    2013-06-01

    In three experiments, we examined whether overshadowing of geometric cues by a discrete landmark (beacon) is due to the relative saliences of the cues. Using a virtual water maze task, human participants were required to locate a platform marked by a beacon in a distinctively shaped pool. In Experiment 1, the beacon overshadowed geometric cues in a trapezium, but not in an isosceles triangle. The longer escape latencies during acquisition in the trapezium control group with no beacon suggest that the geometric cues in the trapezium were less salient than those in the triangle. In Experiment 2, we evaluated whether generalization decrement, caused by the removal of the beacon at test, could account for overshadowing. An additional beacon was placed in an alternative corner. For the control groups, the beacons were identical; for the overshadow groups, they were visually unique. Overshadowing was again found in the trapezium. In Experiment 3, we tested whether the absence of overshadowing in the triangle was due to the geometric cues being more salient than the beacon. Following training, the beacon was relocated to a different corner. Participants approached the beacon rather than the trained platform corner, suggesting that the beacon was more salient. These results suggest that associative processes do not fully explain cue competition in the spatial domain.

  4. Cue-induced craving among inhalant users: Development and preliminary validation of a visual cue paradigm.

    Science.gov (United States)

    Jain, Shobhit; Dhawan, Anju; Kumaran, S Senthil; Pattanayak, Raman Deep; Jain, Raka

    2017-12-01

    Cue-induced craving is known to be associated with a higher risk of relapse: drug-specific cues become conditioned stimuli, eliciting conditioned responses. Cue-reactivity paradigms are important tools for studying psychological responses and functional neuroimaging changes. However, to date, there has been no specific study or validated paradigm for inhalant cue-induced craving research. The study aimed to develop and validate a visual cue stimulus set for inhalant cue-associated craving. The first step (picture selection) involved screening and careful selection of 30 cue- and 30 neutral pictures based on their relevance for naturalistic settings. In the second step (time optimization), a random selection of ten cue pictures each was presented for 4 s, 6 s, and 8 s to seven adolescent male inhalant users, and pre-post craving responses were compared using a Visual Analogue Scale (VAS) for each picture and duration. In the third step (validation), craving responses to each of the 30 cue- and 30 neutral pictures were analysed among 20 adolescent inhalant users. Findings revealed a significant difference between before and after craving responses for the cue pictures, but not the neutral pictures. Using ROC curves, pictures were ranked in order of craving intensity. Finally, the 20 best cue- and 20 neutral pictures were used to assemble a 480 s visual cue paradigm. This is the first study to systematically develop an inhalant cue picture paradigm, which can be used as a tool to examine cue-induced craving in neurobiological studies. Further research, including validation in larger and more diverse samples, is required. Copyright © 2017 Elsevier B.V. All rights reserved.
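The time-optimization step above, comparing pre- and post-exposure VAS ratings per picture and ordering pictures by the craving they evoke, can be sketched as follows (illustrative only; the data and picture names are hypothetical, not from the study):

```python
# Each picture maps to (pre, post) VAS craving ratings across participants
ratings = {
    "cue_01": [(2, 7), (3, 8), (1, 6)],
    "cue_02": [(4, 5), (3, 4), (2, 4)],
    "neutral_01": [(2, 2), (3, 3), (1, 2)],
}

def mean_craving_delta(pairs):
    # Mean post-minus-pre change in craving for one picture
    return sum(post - pre for pre, post in pairs) / len(pairs)

# Rank pictures by induced craving, strongest first
ranked = sorted(ratings, key=lambda p: mean_craving_delta(ratings[p]), reverse=True)
print(ranked)  # ['cue_01', 'cue_02', 'neutral_01']
```

In the actual study, picture selection was based on ROC analysis rather than a raw ranking, but the pre/post VAS difference is the underlying signal in both cases.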

  5. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    Science.gov (United States)

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

  6. Retro-dimension-cue benefit in visual working memory

    OpenAIRE

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-01-01

    In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased prob...

  7. Salient Region Detection by Fusing Foreground and Background Cues Extracted from Single Image

    Directory of Open Access Journals (Sweden)

    Qiangqiang Zhou

    2016-01-01

    Saliency detection is an important preprocessing step in many application fields, such as computer vision, robotics, and graphics, that reduces computational cost by focusing on significant positions and neglecting nonsignificant ones in the scene. Unlike most previous methods, which mainly exploit the contrast of low-level features and fuse the resulting feature maps by simple linear weighting, in this paper we propose a novel salient object detection algorithm that takes both background and foreground cues into consideration and integrates bottom-up coarse salient region extraction and a top-down background measure via boundary label propagation into a unified optimization framework to produce a refined saliency detection result. The coarse saliency map is itself fused from three components: a local contrast map, which accords most closely with psychological law; a global frequency prior map; and a global color distribution map. To form the background map, we first construct an affinity matrix and select nodes lying on the border as labels to represent the background, and then carry out a propagation to generate the regional background map. The proposed model was evaluated on four datasets. As demonstrated in the experiments, our method outperforms most existing saliency detection models with robust performance.
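The coarse fusion of the three component maps described above can be sketched as a pixel-wise combination of normalized maps (a minimal equal-weight illustration, not the authors' optimization framework; the toy maps are hypothetical):

```python
def normalize(m):
    # Scale a flat feature map to [0, 1]; a constant map becomes all zeros.
    lo, hi = min(m), max(m)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in m]

def fuse_coarse_saliency(local_contrast, frequency_prior, color_distribution):
    # Equal-weight fusion: pixel-wise mean of the three normalized maps.
    maps = [normalize(m) for m in (local_contrast, frequency_prior, color_distribution)]
    return [sum(vals) / len(maps) for vals in zip(*maps)]

# Toy 1-D "maps" standing in for flattened images
fused = fuse_coarse_saliency([1, 5, 3], [0.2, 0.9, 0.4], [10, 30, 20])
print(fused)
```

Real saliency models typically learn or optimize the fusion weights rather than averaging, but the normalize-then-combine pattern is the same.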

  8. Visual cues and listening effort: individual variability.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  9. Heuristics of Reasoning and Analogy in Children's Visual Perspective Taking.

    Science.gov (United States)

    Yaniv, Ilan; Shatz, Marilyn

    1990-01-01

    In three experiments, children of three through six years of age were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight was salient. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed to objects facilitated…

  10. Subconscious visual cues during movement execution allow correct online choice reactions.

    Directory of Open Access Journals (Sweden)

    Christian Leukel

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked whether visual information that is not consciously perceived could influence decision-making in a choice reaction task. Ten healthy subjects (28 ± 5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second, Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues of which they were not conscious. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task.

  11. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicited a high degree of accuracy both in training and in follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest that a limited amount of practice using auditory cues can improve performance.

  12. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Science.gov (United States)

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
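The sensitivity measure d' used here is standard signal detection theory: the z-score of the hit rate minus the z-score of the false-alarm rate. A minimal sketch of its computation (not the authors' analysis code) is:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # d' = z(hit rate) - z(false-alarm rate).
    # A log-linear correction (add 0.5 to each cell) avoids infinite
    # z-scores when an observed rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A cued condition with more hits at equal false alarms yields a higher d'
print(d_prime(45, 5, 10, 40) > d_prime(35, 15, 10, 40))  # True
```

Because d' separates sensitivity from response bias, an increase in d' after an auditory cue reflects genuinely better detection, not merely a more liberal criterion.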

  13. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Directory of Open Access Journals (Sweden)

    Gary Lupyan

    BACKGROUND: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. METHODOLOGY/PRINCIPAL FINDINGS: Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. CONCLUSIONS/SIGNIFICANCE: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  14. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    OpenAIRE

    Jesse, A.; McQueen, J.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker...

  15. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  16. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    Science.gov (United States)

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
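The optimal cue-combination rule underlying these models has a compact closed form for two independent Gaussian cues: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone. A minimal sketch, assuming independent Gaussian visual and vestibular heading estimates:

```python
def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
    # Maximum-likelihood (reliability-weighted) combination of two
    # independent Gaussian estimates of heading direction.
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = (var_vis * var_vest) / (var_vis + var_vest)
    return mu, var

# The less reliable visual cue (variance 4) is down-weighted relative to
# the vestibular cue (variance 2); combined variance drops below both.
mu, var = integrate_cues(10.0, 4.0, 4.0, 2.0)
print(mu, var)
```

This is the benchmark against which the reviewed psychophysical studies compare human and monkey heading judgments: if observers integrate optimally, their bimodal discrimination thresholds should match the predicted reduced variance.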

  17. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    NARCIS (Netherlands)

    Jesse, A.; McQueen, J.M.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes

  18. First-Pass Processing of Value Cues in the Ventral Visual Pathway.

    Science.gov (United States)

    Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E

    2018-02-19

    Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in the monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused sound still captures attention. The current study investigated whether there is

  20. Visual Sonority Modulates Infants' Attraction to Sign Language

    Science.gov (United States)

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  1. Head orientation of walking blowflies is controlled by visual and mechanical cues

    OpenAIRE

    Monteagudo Ibarreta, José; Lindemann, Jens Peter; Egelhaaf, Martin

    2017-01-01

    During locomotion, animals employ visual and mechanical cues in order to establish the orientation of their head, which reflects the orientation of the visual coordinate system. However, in certain situations, contradictory cues may suggest different orientations relative to the environment. We recorded blowflies walking on a horizontal or tilted surface surrounded by visual cues suggesting a variety of orientations. We found that the different orientations relative to gra...

  2. Red to green or fast to slow? Infants' visual working memory for "just salient differences".

    Science.gov (United States)

    Kaldy, Zsuzsa; Blaser, Erik

    2013-01-01

    In this study, 6-month-old infants' visual working memory for a static feature (color) and a dynamic feature (rotational motion) was compared. Comparing infants' use of different features can only be done properly if experimental manipulations to those features are equally salient (Kaldy & Blaser, 2009; Kaldy, Blaser, & Leslie, 2006). The interdimensional salience mapping method was used to find two objects that each were one Just Salient Difference from a common baseline object (N = 16). These calibrated stimuli were then used in a subsequent two-alternative forced-choice preferential looking memory test (N = 28). Results showed that infants noted the color change, but not the equally salient change in rotation speed. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  3. Anemonefishes rely on visual and chemical cues to correctly identify conspecifics

    Science.gov (United States)

    Johnston, Nicole K.; Dixson, Danielle L.

    2017-09-01

    Organisms rely on sensory cues to interpret their environment and make important life-history decisions. Accurate recognition is of particular importance in diverse reef environments. Most evidence on the use of sensory cues focuses on those used in predator avoidance or habitat recognition, with little information on their role in conspecific recognition. Yet conspecific recognition is essential for life-history decisions including settlement, mate choice, and dominance interactions. Using a sensory manipulated tank and a two-chamber choice flume, anemonefish conspecific response was measured in the presence and absence of chemical and/or visual cues. Experiments were then repeated in the presence or absence of two heterospecific species to evaluate whether a heterospecific fish altered the conspecific response. Anemonefishes responded to both the visual and chemical cues of conspecifics, but relied on the combination of the two cues to recognize conspecifics inside the sensory manipulated tank. These results contrast previous studies focusing on predator detection where anemonefishes were found to compensate for the loss of one sensory cue (chemical) by utilizing a second cue (visual). This lack of sensory compensation may impact the ability of anemonefishes to acclimate to changing reef environments in the future.

  4. The Role of Visual Cues in Microgravity Spatial Orientation

    Science.gov (United States)

    Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.

    2003-01-01

    In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical, a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant force springs. For several subjects, changing the relative direction of the subjective vertical in weightlessness-either by body

  5. Retro-dimension-cue benefit in visual working memory.

    Science.gov (United States)

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-10-24

    In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis.

  6. Attentional bias to food-related visual cues: is there a role in obesity?

    Science.gov (United States)

    Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M

    2015-02-01

    The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.

  7. The location but not the attributes of visual cues are automatically encoded into working memory.

    Science.gov (United States)

    Chen, Hui; Wyble, Brad

    2015-02-01

    Although it has been well known that visual cues affect the perception of subsequent visual stimuli, relatively little is known about how the cues themselves are processed. The present study attempted to characterize the processing of a visual cue by investigating what information about the cue is stored in terms of both location ("where" is the cue) and attributes ("what" are the attributes of the cue). In 11 experiments subjects performed several trials of reporting a target letter and then answered an unexpected question about the cue (e.g., the location, color, or identity of the cue). This surprise question revealed that participants could report the location of the cue even when the cue never indicated the target location and they were explicitly told to ignore it. Furthermore, the memory trace of this location information endured during encoding of the subsequent target. In contrast to location, attributes of the cue (e.g., color) were poorly reported, even for attributes that were used by subjects to perform the task. These results shed new light on the mechanisms underlying cueing effects and suggest also that the visual system may create empty object files in response to visual cues. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    Science.gov (United States)

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention, but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  9. Tactical decisions for changeable cuttlefish camouflage: visual cues for choosing masquerade are relevant from a greater distance than visual cues used for background matching.

    Science.gov (United States)

    Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T

    2015-10-01

    Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice between these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath the cuttlefish (within a circle whose diameter was one body length, BL); at 0 BL (i.e., directly beside, but not beneath, the cuttlefish); at 1 BL; and at 2 BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. For masquerade, however, objects located multiple body lengths away remained relevant to the choice of camouflage. © 2015 Marine Biological Laboratory.

  10. Saccade frequency response to visual cues during gait in Parkinson's disease: the selective role of attention.

    Science.gov (United States)

    Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn

    2018-04-01

    Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play important roles; however, the relative contribution of each is unclear. Measuring visual exploration (specifically, saccade frequency) during gait allows real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the mechanisms underlying the cue response, which could help in developing effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults, and to investigate the roles of attention and vision in the visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) under single-task and dual-task (concurrent digit-span recall) conditions. Saccade frequency when walking was reduced in PD compared to controls; however, visual cues ameliorated this saccadic deficit, significantly increasing saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention, rather than visual function, was central to saccade frequency and the gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Multimodal cues provide redundant information for bumblebees when the stimulus is visually salient, but facilitate red target detection in a naturalistic background

    Science.gov (United States)

    Corcobado, Guadalupe; Trillo, Alejandro

    2017-01-01

    Our understanding of how floral visitors integrate visual and olfactory cues when seeking food, and of how background complexity affects flower detection, is limited. Here, we aimed to understand the use of visual and olfactory information by bumblebees (Bombus terrestris terrestris L.) seeking flowers against a visually complex background. To explore this issue, we first evaluated the effect of flower colour (red or blue), size (8, 16 and 32 mm), scent (present or absent) and amount of training on the foraging strategy of bumblebees (accuracy, search time and flight behaviour), given the visual complexity of our background. We then explored whether experienced bumblebees, previously trained in the presence of scent, can recall and make use of odour information when foraging on novel visual stimuli carrying a familiar scent. Of all the variables analysed, flower colour had the strongest effect on foraging strategy. Bumblebees searching for blue flowers were more accurate, flew faster, followed more direct paths between flowers and needed less time to find them than bumblebees searching for red flowers. In turn, training and the presence of odour helped bees to find inconspicuous (red) flowers. When bees foraged on red flowers, search time increased with flower size; but search time was independent of flower size when bees foraged on blue flowers. Previous experience with floral scent enhances the detection of a novel colour carrying a familiar scent, probably through elemental association influencing attention. PMID:28898287

  12. Detection of emotional faces: salient physical features guide effective visual search.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
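
    The saliency modeling described above can be illustrated with a toy center-surround computation: a minimal difference-of-box-blurs sketch (not the model the authors used), in which a small, locally bright feature such as a smiling mouth region produces the saliency peak. All function names here are illustrative:

```python
import numpy as np

def box_blur(img, r):
    """Mean filter over a (2r+1) x (2r+1) window, via an integral image."""
    h, w = img.shape
    p = np.pad(img, r, mode="edge")
    integral = np.zeros((h + 2 * r + 1, w + 2 * r + 1))
    integral[1:, 1:] = p.cumsum(0).cumsum(1)
    n = 2 * r + 1
    window_sum = (integral[n:, n:] - integral[:-n, n:]
                  - integral[n:, :-n] + integral[:-n, :-n])
    return window_sum / n**2

def saliency_map(img, r_center=2, r_surround=8):
    """Center-surround contrast: fine-scale mean minus coarse-scale mean."""
    return np.abs(box_blur(img, r_center) - box_blur(img, r_surround))

# A small bright "feature" on a uniform background pops out as the peak.
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0
peak = np.unravel_index(np.argmax(saliency_map(img)), img.shape)
```

    The peak lands on the bright patch because the fine-scale mean stays high there while the coarse-scale mean averages in the dark surround; uniform regions cancel out.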

  13. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    Science.gov (United States)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.

  14. Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments

    Science.gov (United States)

    Koutsoklenis, Athanasios; Papadopoulos, Konstantinos

    2014-01-01

    Introduction: The study presented here examines which haptic cues individuals with visual impairments use more frequently and determines which of these cues are deemed by these individuals to be the most important for way-finding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…

  15. Neurofeedback of visual food cue reactivity: a potential avenue to alter incentive sensitization and craving.

    Science.gov (United States)

    Ihssen, Niklas; Sokunbi, Moses O; Lawrence, Andrew D; Lawrence, Natalia S; Linden, David E J

    2017-06-01

    FMRI-based neurofeedback transforms functional brain activation in real time into sensory stimuli that participants can use to self-regulate brain responses, which can aid the modification of mental states and behavior. Emerging evidence supports the clinical utility of neurofeedback-guided up-regulation of hypoactive networks. In contrast, down-regulation of hyperactive neural circuits appears more difficult to achieve. There are conditions, though, in which down-regulation would be clinically useful, including dysfunctional motivational states elicited by salient reward cues, such as food or drug craving. In this proof-of-concept study, 10 healthy females (mean age = 21.40 years, mean BMI = 23.53) who had fasted for 4 h underwent a novel 'motivational neurofeedback' training in which they learned to down-regulate brain activation during exposure to appetitive food pictures. FMRI feedback was given from individually determined target areas and through decreases/increases in food picture size, thus providing salient motivational consequences in terms of cue approach/avoidance. Our preliminary findings suggest that motivational neurofeedback is associated with functionally specific activation decreases in diverse cortical/subcortical regions, including key motivational areas. There was also preliminary evidence for a reduction of hunger after neurofeedback and an association between down-regulation success and the degree of hunger reduction. Decreasing neural cue responses by motivational neurofeedback may provide a useful extension of existing behavioral methods that aim to modulate cue reactivity. Our pilot findings indicate that reduction of neural cue reactivity is not achieved by top-down regulation but arises in a bottom-up manner, possibly through implicit operant shaping of target area activity.

  16. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    Science.gov (United States)

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  17. Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.

    Science.gov (United States)

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention, the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.

  18. Real-Time Lane Detection on Suburban Streets Using Visual Cue Integration

    Directory of Open Access Journals (Sweden)

    Shehan Fernando

    2014-04-01

    The detection of lane boundaries on suburban streets using images obtained from video constitutes a challenging task. This is mainly due to the difficulties associated with estimating the complex geometric structure of lane boundaries, the quality of lane markings as a result of wear, occlusions by traffic, and shadows caused by road-side trees and structures. Most of the existing techniques for lane boundary detection employ a single visual cue and will only work under certain conditions and where there are clear lane markings. Also, better results are achieved when there are no other on-road objects present. This paper extends our previous work and discusses a novel lane boundary detection algorithm specifically addressing the abovementioned issues through the integration of two visual cues. The first visual cue is based on stripe-like features found on lane lines extracted using a two-dimensional symmetric Gabor filter. The second visual cue is based on a texture characteristic determined using the entropy measure of the predefined neighbourhood around a lane boundary line. The visual cues are then integrated using a rule-based classifier which incorporates a modified sequential covering algorithm to improve robustness. To separate lane boundary lines from other similar features, a road mask is generated using road chromaticity values estimated from CIE L*a*b* colour transformation. Extraneous points around lane boundary lines are then removed by an outlier removal procedure based on studentized residuals. The lane boundary lines are then modelled with Bezier spline curves. To validate the algorithm, extensive experimental evaluation was carried out on suburban streets and the results are presented.
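
    The two visual cues named above can be sketched in NumPy, under the assumption of a cosine-phase Gabor and a grey-level histogram entropy (function names and parameter values are ours, not the paper's): a stripe-like lane marking excites the Gabor cue strongly, while a flat road patch does not.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """2D symmetric (cosine-phase) Gabor kernel: the stripe-feature cue."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()          # zero-mean, so uniform regions give no response

def patch_entropy(patch, bins=16):
    """Shannon entropy of grey levels in a neighbourhood: the texture cue."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# A vertical-stripe patch matching the kernel wavelength vs. a flat patch:
k = gabor_kernel()
xs = np.arange(15) - 7
stripes = 0.5 + 0.5 * np.cos(2 * np.pi * xs / 6.0)[None, :].repeat(15, axis=0)
stripe_response = abs((k * stripes).sum())
flat_response = abs((k * np.full((15, 15), 0.5)).sum())
```

    A real detector would convolve the kernel over the whole frame at several orientations and feed both cue maps to the rule-based classifier; this sketch only shows why each cue discriminates.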

  19. Subconscious visual cues during movement execution allow correct online choice reactions

    DEFF Research Database (Denmark)

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram

    2012-01-01

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked whether visual information that is not consciously perceived could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching … This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations …

  20. Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure.

    Science.gov (United States)

    Bae, Juhee; Watson, Benjamin

    2014-12-01

    In his book Multimedia Learning [7], Richard Mayer asserts that viewers learn best from imagery that provides them with cues to help them organize new information into the correct knowledge structures. Designers have long been exploiting the Gestalt laws of visual grouping to deliver viewers those cues using visual hierarchy, often communicating structures much more complex than the simple organizations studied in psychological research. Unfortunately, designers are largely practical in their work, and have not paused to build a complex theory of structural communication. If we are to build a tool to help novices create effective and well structured visuals, we need a better understanding of how to create them. Our work takes a first step toward addressing this lack, studying how five of the many grouping cues (proximity, color similarity, common region, connectivity, and alignment) can be effectively combined to communicate structured text and imagery from real world examples. To measure the effectiveness of this structural communication, we applied a digital version of card sorting, a method widely used in anthropology and cognitive science to extract cognitive structures. We then used tree edit distance to measure the difference between perceived and communicated structures. Our most significant findings are: 1) with careful design, complex structure can be communicated clearly; 2) communicating complex structure is best done with multiple reinforcing grouping cues; 3) common region (use of containers such as boxes) is particularly effective at communicating structure; and 4) alignment is a weak structural communicator.
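
    The tree edit distance used above to compare perceived and communicated structures can be sketched as a simplified top-down alignment over (label, children) tuples. This is a toy variant (production implementations typically use the Zhang-Shasha algorithm), with hypothetical card-sort labels:

```python
def tree_size(t):
    """Number of nodes in a (label, children) tree."""
    return 1 + sum(tree_size(c) for c in t[1])

def tree_distance(a, b):
    """Edits (relabel, insert subtree, delete subtree) turning a into b.
    Child lists are aligned in order with a Levenshtein-style DP."""
    cost = 0 if a[0] == b[0] else 1          # relabel the root?
    ca, cb = a[1], b[1]
    m, n = len(ca), len(cb)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + tree_size(ca[i - 1])   # delete subtree
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + tree_size(cb[j - 1])   # insert subtree
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + tree_size(ca[i - 1]),
                          d[i][j - 1] + tree_size(cb[j - 1]),
                          d[i - 1][j - 1] + tree_distance(ca[i - 1], cb[j - 1]))
    return cost + d[m][n]

# Perceived vs. communicated hierarchies from a card sort (labels invented):
perceived = ("root", [("header", []), ("body", [("item", []), ("item", [])])])
communicated = ("root", [("header", []), ("body", [("item", [])])])
```

    Here the perceived structure differs from the communicated one by a single extra item, so the distance is 1; a larger distance means the grouping cues communicated the structure less faithfully.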

  1. Probability cueing of distractor locations: both intertrial facilitation and statistical learning mediate interference reduction.

    Science.gov (United States)

    Goschy, Harriet; Bakos, Sarolta; Müller, Hermann J; Zehetleitner, Michael

    2014-01-01

    Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed "probability cueing." The present study investigated whether probability cueing can not only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor ("additional singleton") were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.

  2. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Generating physical symptoms from visual cues: An experimental study.

    Science.gov (United States)

    Ogden, Jane; Zoukas, Serafim

    2009-12-01

    This experimental study explored whether the physical symptoms of cold, pain and itchiness could be generated by visual cues, whether the symptoms varied in the ease with which they could be generated, and whether they were related to negative affect. Participants were randomly allocated by group to watch one of three videos relating to cold (e.g. ice, snow, wind), pain (e.g. sporting injuries, tattoos) or itchiness (e.g. head lice, scratching). They then rated their self-reported symptoms of cold, pain and itchiness as well as their negative affect (depression and anxiety). The researcher recorded their observed behaviour relating to these symptoms. The results showed that the interventions were successful and that all three symptoms could be generated by the visual cues, in terms of both self-report and observed behaviour. In addition, the pain video generated higher levels of anxiety and depression than the other two videos. Further, the degree of itchiness was related to the degree of anxiety. Symptom onset also varied between symptoms, with self-reported cold symptoms being stronger than either pain or itchiness. The results show that physical symptoms can be generated by visual cues, indicating that psychological factors are involved not only in symptom perception but also in symptom onset.

  4. 'You see?' Teaching and learning how to interpret visual cues during surgery.

    Science.gov (United States)

    Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei

    2015-11-01

    The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. In articulating the features of co-construction in the visual domain, this work enables the development of

  5. Automaticity of phasic alertness: Evidence for a three-component model of visual cueing.

    Science.gov (United States)

    Lin, Zhicheng; Lu, Zhong-Lin

    2016-10-01

    The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue (a double cue) is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue (a single cue) that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, then, top-down influences, in the form of contextual relevance and cue awareness, can have opposite influences on the cueing effect from the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention (orienting, alerting, and inhibition) to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital.
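
    The attention network test referenced above derives its network effects as simple reaction-time subtractions between cue and flanker conditions. A sketch of that arithmetic with hypothetical condition means (the `ant_scores` helper and the values are illustrative, not from this study):

```python
def ant_scores(rt):
    """Attention network effects from mean RTs (ms) per condition.
    Larger scores = larger cueing benefit (alerting, orienting)
    or larger interference cost (conflict)."""
    return {
        "alerting": rt["no_cue"] - rt["double_cue"],
        "orienting": rt["center_cue"] - rt["spatial_cue"],
        "conflict": rt["incongruent"] - rt["congruent"],
    }

# Hypothetical condition means, for illustration only:
scores = ant_scores({
    "no_cue": 560, "double_cue": 520,
    "center_cue": 540, "spatial_cue": 495,
    "incongruent": 630, "congruent": 540,
})
```

    With these numbers the double cue buys 40 ms of alerting benefit; the manipulation described in the abstract amounts to asking how that subtraction changes with the informativeness and visibility of the cues.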

  6. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    Science.gov (United States)

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  7. Retrospective cues based on object features improve visual working memory performance in older adults.

    Science.gov (United States)

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  8. The reliability of retro-cues determines the fate of noncued visual working memory representations

    NARCIS (Netherlands)

    Günseli, E.; van Moorselaar, D.; Meeter, M.; Olivers, C.N.L.

    2015-01-01

    Retrospectively cueing an item retained in visual working memory during maintenance is known to improve its retention. However, studies have provided conflicting results regarding the costs of such retro-cues for the noncued items, leading to different theories on the mechanisms behind visual

  9. Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.

    Science.gov (United States)

    Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu

    2015-09-30

    Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was short, the cue around the receptive fields enhanced responses to the probe, whereas a cue in the opposite visual field weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. An abrupt, salient flash enhanced neuronal responses, and shortened the animal's reaction time, to a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain. Copyright © 2015 the authors.

  10. Visual attention to food cues in obesity: an eye-tracking study.

    Science.gov (United States)

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images than their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.

  11. Food and conspecific chemical cues modify visual behavior of zebrafish, Danio rerio.

    Science.gov (United States)

    Stephenson, Jessica F; Partridge, Julian C; Whitlock, Kathleen E

    2012-06-01

    Animals use the different qualities of olfactory and visual sensory information to make decisions. Ethological and electrophysiological evidence suggests that there is cross-modal priming between these sensory systems in fish. We present the first experimental study showing that ecologically relevant chemical mixtures alter visual behavior, using adult male and female zebrafish, Danio rerio. Neutral-density filters were used to attenuate the light reaching the tank to an initial light intensity of 2.3×10^16 photons/s/m^2. Fish were exposed to food cue and to alarm cue. The light intensity was then increased by the removal of one layer of filter (nominal absorbance 0.3) every minute until, after 10 minutes, the light level was 15.5×10^16 photons/s/m^2. Adult male and female zebrafish responded to a moving visual stimulus at lower light levels if they had first been exposed to food cue, or to conspecific alarm cue. These results suggest the need for more integrative studies of sensory biology.

  12. Interplay of Gravicentric, Egocentric, and Visual Cues About the Vertical in the Control of Arm Movement Direction.

    Science.gov (United States)

    Bock, Otmar; Bury, Nils

    2018-03-01

    Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed midway between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62 and, correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
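
    The weighted-sum account in this abstract can be illustrated with a short sketch: two cue directions are combined as a weighted vector average, using the relative weights of 0.62 and 0.38 reported above. The vector-average rule and the function name are illustrative assumptions, not the authors' model.

```python
import math

def combined_vertical(visual_gravicentric_deg, egocentric_deg,
                      w_vg=0.62, w_ego=0.38):
    """Weighted vector average of two cue directions (degrees).

    Weights default to the averages reported in the abstract; combining
    angles via unit vectors avoids wrap-around problems near 360 deg.
    """
    x = (w_vg * math.cos(math.radians(visual_gravicentric_deg))
         + w_ego * math.cos(math.radians(egocentric_deg)))
    y = (w_vg * math.sin(math.radians(visual_gravicentric_deg))
         + w_ego * math.sin(math.radians(egocentric_deg)))
    return math.degrees(math.atan2(y, x))

# A 30 deg conflict between the cues yields a response biased toward the
# more heavily weighted visual-gravicentric direction.
response = combined_vertical(0.0, 30.0)
```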

  13. The footprints of visual attention during search with 100% valid and 100% invalid cues.

    Science.gov (United States)

    Eckstein, Miguel P; Pham, Binh T; Shimozaki, Steven S

    2004-06-01

    Human performance during visual search typically improves when spatial cues indicate the possible target locations. In many instances, the performance improvement is quantitatively predicted by a Bayesian or quasi-Bayesian observer in which visual attention simply selects the information at the cued locations without changing the quality of processing or sensitivity and ignores the information at the uncued locations. Aside from the generally good agreement between the effect of the cue on model and human performance, there has been little independent confirmation that humans are effectively selecting the relevant information. In this study, we used the classification image technique to assess the effectiveness of spatial cues in the attentional selection of relevant locations and suppression of irrelevant locations indicated by spatial cues. Observers searched for a bright target among dimmer distractors that might appear (with 50% probability) in one of eight locations in visual white noise. The possible target location was indicated using a 100% valid box cue or seven 100% invalid box cues, in which the only potential target location was the uncued one. For both conditions, we found statistically significant perceptual templates shaped as differences of Gaussians at the relevant locations, with no perceptual templates at the irrelevant locations. We did not find statistically significant differences between the shapes of the inferred perceptual templates for the 100% valid and 100% invalid cue conditions. The results confirm the idea that during search visual attention allows the observer to effectively select relevant information and ignore irrelevant information. The results for the 100% invalid cues condition suggest that the selection process is not drawn automatically to the cue but can be under the observer's voluntary control.
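
    The classification image technique used here estimates an observer's perceptual template by reverse correlation: average the noise fields accompanying one response class and subtract the average for the other class. A minimal sketch with a simulated observer whose template weights a single pixel; the simulation and all names are illustrative assumptions, not the study's stimuli.

```python
import numpy as np

def classification_image(noise_fields, said_present):
    """Reverse-correlation template estimate: mean noise on 'present'
    responses minus mean noise on 'absent' responses."""
    noise_fields = np.asarray(noise_fields, dtype=float)
    said_present = np.asarray(said_present, dtype=bool)
    return (noise_fields[said_present].mean(axis=0)
            - noise_fields[~said_present].mean(axis=0))

# Simulated observer: says "present" whenever the noise at pixel 3 is bright.
rng = np.random.default_rng(0)
noise = rng.normal(size=(5000, 16))      # 5000 trials, 16-pixel stimuli
resp = noise[:, 3] > 0
ci = classification_image(noise, resp)   # peaks at the pixel the observer uses
```

    In the real experiment the same subtraction is done per response class at each search location, which is how templates at cued vs. uncued locations can be compared.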

  14. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  15. Memory for location and visual cues in white-eared hummingbirds Hylocharis leucotis

    Directory of Open Access Journals (Sweden)

    Guillermo PÉREZ, Carlos LARA, José VICCON-PALE, Martha SIGNORET-POILLON

    2011-08-01

    In nature hummingbirds face floral resources whose availability, quality and quantity can vary spatially and temporally. Thus, they must constantly make foraging decisions about which patches, plants and flowers to visit, partly as a function of the nectar reward. The uncertainty of these decisions would possibly be reduced if an individual could remember locations or use visual cues to avoid revisiting recently depleted flowers. In the present study, we carried out field experiments with white-eared hummingbirds Hylocharis leucotis to evaluate their use of locations or visual cues when foraging on natural flowers of Penstemon roseus. We evaluated the use of spatial memory by observing birds while they were foraging between two plants and within a single plant. Our results showed that hummingbirds prefer to use location when foraging between two plants, but they also use visual cues to efficiently locate unvisited rewarded flowers when they feed on a single plant. However, in the absence of visual cues, in both experiments birds mainly used the location of previously visited flowers to make subsequent visits. Our data suggest that hummingbirds are capable of learning and employ this flexibility depending on the environmental conditions faced and the information acquired in previous visits. [Current Zoology 57 (4): 468–476, 2011]

  16. Red to Green or Fast to Slow? Infants' Visual Working Memory for "Just Salient Differences"

    Science.gov (United States)

    Kaldy, Zsuzsa; Blaser, Erik

    2013-01-01

    In this study, 6-month-old infants' visual working memory for a static feature (color) and a dynamic feature (rotational motion) was compared. Comparing infants' use of different features can only be done properly if experimental manipulations to those features are equally salient (Kaldy & Blaser, 2009; Kaldy, Blaser, & Leslie,…

  17. Visual Attention to Print-Salient and Picture-Salient Environmental Print in Young Children

    Science.gov (United States)

    Neumann, Michelle M.; Summerfield, Katelyn; Neumann, David L.

    2015-01-01

    Environmental print is composed of words and contextual cues such as logos and pictures. The salience of the contextual cues may influence attention to words and thus the potential of environmental print in promoting early reading development. The present study explored this by presenting pre-readers (n = 20) and beginning readers (n = 16) with…

  18. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli can modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating our surrounding world, and the current findings, showing that affective sounds can influence visual attention, provide evidence that we make use of affective information during perceptual processing.

  19. Deaf children's use of clear visual cues in mindreading.

    Science.gov (United States)

    Hao, Jian; Su, Yanjie

    2014-11-01

    Previous studies show that typically developing 4-year-old children can understand other people's false beliefs but that deaf children of hearing families have difficulty in understanding false beliefs until the age of approximately 13. Because false beliefs are implicit mental states that are not expressed through clear visual cues in standard false belief tasks, the present study examines the hypothesis that the deaf children's developmental delay in understanding false beliefs may reflect their difficulty in understanding a spectrum of mental states that are not expressed through clear visual cues. Nine- to 13-year-old deaf children of hearing families and 4- to 6-year-old typically developing children completed false belief tasks and emotion recognition tasks under different cue conditions. The results indicated that, after controlling for the effect of the children's language abilities, the deaf children inferred other people's false beliefs as accurately as the typically developing children when other people's false beliefs were clearly expressed through their eye-gaze direction. However, the deaf children performed worse than the typically developing children when asked to infer false beliefs with ambiguous or no eye-gaze cues. Moreover, the deaf children were capable of recognizing other people's emotions that were clearly conveyed by their facial or body expressions. The results suggest that although theory-based or simulation-based mental state understanding is typical of hearing children's theory of mind mechanism, for deaf children of hearing families, clear cue-based mental state understanding may be their specific theory of mind mechanism. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. How hearing aids, background noise, and visual cues influence objective listening effort.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2013-09-01

    The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners' working memory capacity, verbal processing speed, or lipreading skill. Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal to noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort, and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. 
Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No

  1. Salient region detection by fusing bottom-up and top-down features extracted from a single image.

    Science.gov (United States)

    Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng

    2014-10-01

    Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: the color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree of scattering and the eccentricities of feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image, as a significant top-down feature for visual attention in the image, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions for the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.
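
    The adaptive-weighting idea, giving more confidence to a feature map whose activation is spatially concentrated (low degree of scattering), can be sketched as follows. The spatial-variance weight below is a simplified stand-in for the paper's actual scheme, which the abstract does not fully specify; all names are illustrative.

```python
import numpy as np

def scatter_weight(fmap):
    """Confidence weight for a saliency feature map: activation that is
    concentrated around its centroid (low spatial variance) scores higher."""
    p = fmap / (fmap.sum() + 1e-12)                 # normalize to a distribution
    ys, xs = np.mgrid[0:fmap.shape[0], 0:fmap.shape[1]]
    cy, cx = (p * ys).sum(), (p * xs).sum()         # activation centroid
    scatter = (p * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum()
    return 1.0 / (1.0 + scatter)

def fuse_maps(maps):
    """Combine feature maps with weights proportional to scatter_weight."""
    w = np.array([scatter_weight(m) for m in maps])
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, maps))

# A focused map dominates a diffuse one in the fused result.
focused = np.zeros((8, 8)); focused[4, 4] = 1.0
diffuse = np.ones((8, 8)) / 64.0
fused = fuse_maps([focused, diffuse])
```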

  2. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    Science.gov (United States)

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated, probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  3. Perceptual stimulus-A Bayesian-based integration of multi-visual-cue approach and its application

    Institute of Scientific and Technical Information of China (English)

    XUE JianRu; ZHENG NanNing; ZHONG XiaoPin; PING LinJiang

    2008-01-01

    With the view that a visual cue can be taken as a kind of stimulus, the study of the mechanism of the visual perception process using visual cues in their probabilistic representation eventually leads to a class of statistical integration of multiple visual cues (IMVC) methods, which have been applied widely in perceptual grouping, video analysis, and other basic problems in computer vision. In this paper, a survey of the basic ideas and recent advances of IMVC methods is presented, with a focus on the models and algorithms of IMVC for video analysis within the framework of Bayesian estimation. Furthermore, two typical problems in video analysis, robust visual tracking and the "switching problem" in multi-target tracking (MTT), are taken as test beds to verify a series of Bayesian-based IMVC methods proposed by the authors. Finally, the relations between statistical IMVC and the visual perception process, as well as potential future research work for IMVC, are discussed.
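
    The core of Bayesian cue integration, combining a prior over hypotheses with one likelihood per visual cue, can be sketched in a few lines. The naive independence assumption and the discrete-hypothesis setting are illustrative simplifications, not the survey's full framework.

```python
import numpy as np

def fuse_cues(prior, cue_likelihoods):
    """Posterior over discrete hypotheses (e.g., candidate target states)
    from a prior and independent per-cue likelihoods: p(h|cues) is
    proportional to p(h) * prod_i p(cue_i|h)."""
    post = np.asarray(prior, dtype=float)
    for lik in cue_likelihoods:
        post = post * np.asarray(lik, dtype=float)
    return post / post.sum()

# Two cues (say, color and motion) both weakly favor hypothesis 1; the
# fused posterior favors it more strongly than either cue alone.
prior = [1 / 3, 1 / 3, 1 / 3]
color = [0.2, 0.6, 0.2]
motion = [0.3, 0.5, 0.2]
posterior = fuse_cues(prior, [color, motion])
```

    In tracking applications the same multiplication is carried out per particle or per candidate association, which is where the "switching problem" in MTT enters.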

  4. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    Science.gov (United States)

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow-changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow-changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. The impact of age, ongoing task difficulty, and cue salience on preschoolers' prospective memory performance: the role of executive function.

    Science.gov (United States)

    Mahy, Caitlin E V; Moses, Louis J; Kliegel, Matthias

    2014-11-01

    The current study examined the impact of age, ongoing task (OT) difficulty, and cue salience on 4- and 5-year-old children's prospective memory (PM) and also explored the relation between individual differences in executive function (working memory, inhibition, and shifting) and PM. OT difficulty and cue salience are predicted to affect the detection of PM cues based on the multiprocess framework, yet neither has been thoroughly investigated in young children. OT difficulty was manipulated by requiring children to sort cards according to the size of pictured items (easy) or by opposite size (difficult), and cue salience was manipulated by placing a red border around half of the target cues (salient) and no border around the other cues (non-salient). The 5-year-olds outperformed the 4-year-olds on the PM task, and salient PM cues resulted in better PM performance than non-salient cues. There was no main effect of OT difficulty, and the interaction between cue salience and OT difficulty was not significant. However, a planned comparison revealed that the combination of non-salient cues and a difficult OT resulted in significantly worse PM performance than that in all of the other conditions. Inhibition accounted for significant variance in PM performance for non-salient cues and for marginally significant variance for salient cues. Furthermore, individual differences in inhibition fully mediated the effect of age on PM performance. Results are discussed in the context of the multiprocess framework and with reference to preschoolers' difficulty with the executive demands of dividing attention between the OT and PM task. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Dementias show differential physiological responses to salient sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ("looming") or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16; progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.
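
    The stimulus manipulation described, using intensity cues to create approaching ("looming") versus withdrawing percepts, amounts to ramping a tone's level up or down over its duration. A minimal sketch; the 30 dB ramp, sample rate, and function name are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def intensity_ramped_tone(freq=440.0, dur=1.0, sr=44100,
                          ramp_db=30.0, approaching=True):
    """Sine tone whose level rises ('looming'/approaching) or falls
    ('withdrawing') by ramp_db decibels across its duration."""
    t = np.arange(int(dur * sr)) / sr
    db = np.linspace(-ramp_db, 0.0, t.size)       # rising level in dB
    if not approaching:
        db = db[::-1]                             # falling level
    return 10.0 ** (db / 20.0) * np.sin(2 * np.pi * freq * t)

# A looming tone: quiet at onset, full level at offset.
loom = intensity_ramped_tone(approaching=True)
```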

  7. Cannabis cue-induced brain activation correlates with drug craving in limbic and visual salience regions: Preliminary results

    Science.gov (United States)

    Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.

    2013-01-01

    Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. FMRI blood-oxygenation-level-dependent (BOLD) signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions targeted at treating cannabis dependence. PMID:24035535

  8. Event-related potentials reveal increased distraction by salient global objects in older adults

    DEFF Research Database (Denmark)

    Wiegand, Iris; Finke, Kathrin; Töllner, Thomas

    Age-related changes in visual functions influence how older individuals perceive and react upon objects in their environment. In particular, older individuals might be more distracted by highly salient, irrelevant information. Kanizsa figures induce a ‘global precedence’ effect, which reflects a processing advantage for salient whole-object representations relative to configurations of local elements not inducing a global form. We investigated event-related potential (ERP) correlates of age-related decline in visual abilities, and specifically, distractibility by salient global objects in visual...

  9. Are multiple visual short-term memory storages necessary to explain the retro-cue effect?

    Science.gov (United States)

    Makovski, Tal

    2012-06-01

    Recent research has shown that change detection performance is enhanced when, during the retention interval, attention is cued to the location of the upcoming test item. This retro-cue advantage has led some researchers to suggest that visual short-term memory (VSTM) is divided into a durable, limited-capacity storage and a more fragile, high-capacity storage. Consequently, performance is poor on the no-cue trials because fragile VSTM is overwritten by the test display and only durable VSTM is accessible under these conditions. In contrast, performance is improved in the retro-cue condition because attention keeps fragile VSTM accessible. The aim of the present study was to test the assumptions underlying this two-storage account. Participants were asked to encode an array of colors for a change detection task involving no-cue and retro-cue trials. A retro-cue advantage was found even when the cue was presented after a visual (Experiment 1) or a central (Experiment 2) interference. Furthermore, the magnitude of the interference was comparable between the no-cue and retro-cue trials. These data undermine the main empirical support for the two-storage account and suggest that the presence of a retro-cue benefit cannot be used to differentiate between different VSTM storages.

  10. Numerosity estimation in visual stimuli in the absence of luminance-based cues.

    Directory of Open Access Journals (Sweden)

    Peter Kramer

    2011-02-01

    Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational of numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues such as, in visual stimuli, spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself. Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion. The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.

  11. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    Science.gov (United States)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  12. Task-specific visual cues for improving process model understanding

    NARCIS (Netherlands)

    Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.

    2016-01-01

    Context Business process models support various stakeholders in managing business processes and designing process-aware information systems. In order to make effective use of these models, they have to be readily understandable. Objective Prior research has emphasized the potential of visual cues to

  13. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    Science.gov (United States)

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

    We examined the role of temporal synchrony-the simultaneous appearance of visual features-in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  14. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  15. Dementias show differential physiological responses to salient sounds

    Directory of Open Access Journals (Sweden)

    Phillip David Fletcher

    2015-03-01

Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ('looming') or less salient withdrawing sounds. Pupil dilatation responses and behavioural rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n=10; behavioural variant frontotemporal dementia, n=16; progressive non-fluent aphasia, n=12; amnestic Alzheimer's disease, n=10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioural response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  16. Dementias show differential physiological responses to salient sounds

    Science.gov (United States)

    Fletcher, Phillip D.; Nicholas, Jennifer M.; Shakespeare, Timothy J.; Downey, Laura E.; Golden, Hannah L.; Agustus, Jennifer L.; Clark, Camilla N.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching (“looming”) or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16, progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases. PMID:25859194

  17. Auditory and visual cueing modulate cycling speed of older adults and persons with Parkinson's disease in a Virtual Cycling (V-Cycle) system.

    Science.gov (United States)

    Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E

    2016-08-19

Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine if persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue rate presentation was manipulated. Data were analyzed by condition using factorial RM-ANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for both the auditory and visual cueing conditions. Persons with PD increased their pedaling rate in both the auditory (F = 4.78, p = 0.029) and visual cueing conditions (F = 26.48), as did the age-matched healthy adults (auditory: F = 24.72; visual: F = 40.69). Trial-to-trial analysis of the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 or smaller) as visual cue rates increased. Persons with PD required the combination of the VE and visual cues in order to obtain an increase in cycling intensity. The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues to alter cycling speed in a VE is a method to increase exercise intensity that may promote fitness.

  18. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    Directory of Open Access Journals (Sweden)

    Fabian Draht

    2017-06-01

Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  19. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    Science.gov (United States)

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  20. [Visual cues as a therapeutic tool in Parkinson's disease. A systematic review].

    Science.gov (United States)

    Muñoz-Hellín, Elena; Cano-de-la-Cuerda, Roberto; Miangolarra-Page, Juan Carlos

    2013-01-01

Sensory stimuli or sensory cues are being used as a therapeutic tool for improving gait disorders in Parkinson's disease patients, but most studies seem to focus on auditory stimuli. The aim of this study was to conduct a systematic review of the use of visual cues for gait disorders, dual tasks during gait, freezing and the incidence of falls in patients with Parkinson's disease, in order to derive therapeutic implications. We conducted a systematic review in the main databases, including the Cochrane Database of Systematic Reviews, TripDataBase, PubMed, Ovid MEDLINE, Ovid EMBASE and the Physiotherapy Evidence Database, from 2005 to 2012, according to the recommendations of the Consolidated Standards of Reporting Trials, evaluating the quality of the papers included with the Downs & Black Quality Index. 21 articles were finally included in this systematic review (with a total of 892 participants), with variable methodological quality, achieving an average of 17.27 points on the Downs & Black Quality Index (range: 11-21). Visual cues produce improvements in temporal-spatial gait parameters and turning execution, and reduce the occurrence of freezing and falls in Parkinson's disease patients. Visual cues also appear to benefit dual tasks during gait, reducing the interference of the second task. Further studies are needed to determine the preferred type of stimuli for each stage of the disease. Copyright © 2012 SEGG. Published by Elsevier España. All rights reserved.

  1. Visual and Proprioceptive Cue Weighting in Children with Developmental Coordination Disorder, Autism Spectrum Disorder and Typical Development

    Directory of Open Access Journals (Sweden)

    L Miller

    2013-10-01

Accurate movement of the body and the perception of the body's position in space usually rely on both visual and proprioceptive cues. These cues are weighted differently depending on task, visual conditions and neurological factors. Children with Developmental Coordination Disorder (DCD) and often also children with Autism Spectrum Disorder (ASD) have movement deficits, and there is evidence that cue weightings may differ between these groups. It is often reported that ASD is linked to an increased reliance on proprioceptive information at the expense of visual information (Haswell et al., 2009; Gepner et al., 1995). The inverse appears to be true for DCD (Wann et al., 1998; Biancotto et al., 2011). I will report experiments comparing, for the first time, relative weightings of visual and proprioceptive information in children aged 8-14 with ASD, DCD and typical development. Children completed the Movement Assessment Battery for Children (MABC-II) to assess motor ability and a visual-proprioceptive matching task to assess relative cue weighting. Results from the movement battery provided evidence for movement deficits in ASD similar to those in DCD. Cue weightings in the matching task did not differentiate the clinical groups; however, those children with ASD with relatively spared movement skills tended to weight visual cues less heavily than those with DCD-like movement deficits. These findings will be discussed with reference to previous DSM-IV diagnostic criteria and also relevant revisions in the DSM-V.

  2. Listeners' expectation of room acoustical parameters based on visual cues

    Science.gov (United States)

    Valente, Daniel L.

Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters (early-to-late reverberant energy ratio and reverberation time) of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer.

  3. Integration of visual and inertial cues in perceived heading of self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Weesie, H.M.; Werkhoven, P.J.; Groen, E.L.

    2010-01-01

    In the present study, we investigated whether the perception of heading of linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments compared to unisensory judgments. Nine participants
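The MLI prediction mentioned in this abstract follows from the standard cue-combination model: assuming independent Gaussian noise on each cue, the maximum-likelihood estimate is a reliability-weighted average whose variance is smaller than that of either cue alone. A minimal numerical sketch (the heading values and noise levels below are illustrative, not the study's data):

```python
import numpy as np

def mli_fuse(means, sigmas):
    """Maximum-likelihood fusion of independent Gaussian cues:
    weights are normalized inverse variances, and the fused variance
    is the harmonic combination 1 / sum(1 / var_i)."""
    means = np.asarray(means, dtype=float)
    var = np.asarray(sigmas, dtype=float) ** 2
    weights = (1.0 / var) / np.sum(1.0 / var)
    fused_mean = np.sum(weights * means)
    fused_sigma = np.sqrt(1.0 / np.sum(1.0 / var))
    return fused_mean, fused_sigma

# Hypothetical heading estimates: visual cue 10 deg (sigma 4 deg),
# inertial cue 2 deg (sigma 8 deg).
mean, sigma = mli_fuse([10.0, 2.0], [4.0, 8.0])
# The fused sigma is smaller than either unisensory sigma, which is
# the smaller-variance prediction the study tests.
```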

  4. Gait parameter control timing with dynamic manual contact or visual cues

    Science.gov (United States)

    Shi, Peter; Werner, William

    2016-01-01

    We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. 
Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms.

  5. Gait parameter control timing with dynamic manual contact or visual cues.

    Science.gov (United States)

    Rabin, Ely; Shi, Peter; Werner, William

    2016-06-01

    We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. 
Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms.

  6. Peripheral Visual Cues: Their Fate in Processing and Effects on Attention and Temporal-Order Perception.

    Science.gov (United States)

    Tünnermann, Jan; Scharlau, Ingrid

    2016-01-01

Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments (TOJs). In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued TOJs, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued TOJs were insensitive to these effects because, in psychometric distributions, they are concealed by the conjoint effects of cue-target confusions and processing-rate changes.
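Models in this tradition treat the two stimuli as racing exponential encoding processes, so a rate advantage for the cued stimulus shifts the temporal-order psychometric function. A simplified sketch of that race (the memoryless-race formula is standard; the rates here are illustrative, not the paper's fitted parameters):

```python
import math

def p_cued_first(v_cued, v_other, soa):
    """Probability that the cued stimulus finishes encoding first when it
    leads by `soa` seconds and encoding times are exponential with rates
    v_cued and v_other. By memorylessness, the cued stimulus either
    finishes during its head start, or wins the remaining fair race
    with probability v_cued / (v_cued + v_other)."""
    head_start = 1.0 - math.exp(-v_cued * soa)
    tie_break = math.exp(-v_cued * soa) * v_cued / (v_cued + v_other)
    return head_start + tie_break

# With equal rates and zero asynchrony the race is fair (0.5);
# raising v_cued (e.g., via attention) shifts the function,
# mimicking prior entry.
```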

  7. Visual attention and the apprehension of spatial relations: the case of depth.

    Science.gov (United States)

    Moore, C M; Elsinger, C L; Lleras, A

    2001-05-01

    Several studies have shown that targets defined on the basis of the spatial relations between objects yield highly inefficient visual search performance (e.g., Logan, 1994; Palmer, 1994), suggesting that the apprehension of spatial relations may require the selective allocation of attention within the scene. In the present study, we tested the hypothesis that depth relations might be different in this regard and might support efficient visual search. This hypothesis was based, in part, on the fact that many perceptual organization processes that are believed to occur early and in parallel, such as figure-ground segregation and perceptual completion, seem to depend on the assignment of depth relations. Despite this, however, using increasingly salient cues to depth (Experiments 2-4) and including a separate test of the sufficiency of the most salient depth cue used (Experiment 5), no evidence was found to indicate that search for a target defined by depth relations is any different than search for a target defined by other types of spatial relations, with regard to efficiency of search. These findings are discussed within the context of the larger literature on early processing of three-dimensional characteristics of visual scenes.

  8. ACTION RECOGNITION USING SALIENT NEIGHBORING HISTOGRAMS

    DEFF Research Database (Denmark)

    Ren, Huamin; Moeslund, Thomas B.

    2013-01-01

Combining spatio-temporal interest points with Bag-of-Words models achieves state-of-the-art performance in action recognition. However, existing methods based on "bag-of-words" models either are too local to capture the variance in space/time or fail to solve the ambiguity problem in spatial and temporal dimensions. Instead, we propose a salient vocabulary construction algorithm to select visual words from a global point of view, and form compact descriptors to represent discriminative histograms in the neighborhoods. Those salient neighboring histograms are then trained to model different actions...
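The bag-of-words step that this abstract builds on quantizes local descriptors against a visual vocabulary and represents a video as a histogram of word counts. A minimal sketch of that step (the toy vocabulary and descriptors are hypothetical; the paper's salient vocabulary selection is not reproduced here):

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word
    (squared Euclidean distance) and return an L1-normalized
    bag-of-words histogram over the vocabulary."""
    # Pairwise squared distances: (n_desc, n_words)
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)  # nearest codeword index per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

vocab = np.array([[0.0, 0.0], [1.0, 1.0]])  # toy 2-word vocabulary
desc = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]])
h = bow_histogram(desc, vocab)  # one descriptor maps to word 0, two to word 1
```

In a full pipeline the vocabulary would come from clustering training descriptors, and the histograms would feed a classifier.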

  9. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach: color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of a segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
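The regional scoring step described in this abstract can be sketched as logistic regression over per-region cue features. A minimal illustration (the weights, bias, and feature values below are hypothetical, not the paper's learned coefficients or its fifteen-dimensional mapping):

```python
import numpy as np

def saliency_scores(region_features, weights, bias):
    """Map each region's cue feature vector to a saliency score in (0, 1)
    with a logistic function over a linear combination of the cues."""
    z = region_features @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights over four cues: global color contrast,
# center-boundary prior, color-distribution compactness, objectness.
w = np.array([1.5, 1.0, 0.8, 1.2])
regions = np.array([
    [0.9, 0.8, 0.7, 0.9],  # object-like region: strong cue responses
    [0.1, 0.2, 0.1, 0.0],  # background region: weak cue responses
])
s = saliency_scores(regions, w, bias=-2.0)
# Scoring every region this way yields a per-level saliency map,
# which the method then fuses across abstraction levels.
```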

  10. Forgotten but not gone: Retro-cue costs and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory

    NARCIS (Netherlands)

    van Moorselaar, D.; Olivers, C.N.L.; Theeuwes, J.; Lamme, V.A.F.; Sligte, I.G.

    2015-01-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we

  11. Forgotten but not gone: retro-cue cost and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory

    NARCIS (Netherlands)

    van Moorselaar, D.; Olivers, C.N.L.; Theeuwes, J.; Lamme, V.A.F.; Sligte, I.G.

    2015-01-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we

  12. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    Directory of Open Access Journals (Sweden)

    Anthony William Blanchfield

    2014-12-01

The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effect of these non-conscious visual cues on effort and performance during physical tasks is, however, unknown. We report two experiments investigating the effect of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1, thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled for significantly longer (178 s, p = .04) when subliminally primed with happy faces. A 2 x 5 (condition x iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test, with lower RPE when subjects were subliminally primed with happy faces (p = .04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer (399 s, p = .04) TTE in comparison to inaction words. As in Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = .03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health-related exercise.

  13. Cueing and Anxiety in a Visual Concept Learning Task.

    Science.gov (United States)

    Turner, Philip M.

    This study investigated the relationship of two anxiety measures (the State-Trait Anxiety Inventory-Trait Form and the S-R Inventory of Anxiousness-Exam Form) to performance on a visual concept-learning task with embedded criterial information. The effect on anxiety reduction of cueing criterial information was also examined, and two levels of…

  14. Effect of visual cues on the resolution of perceptual ambiguity in Parkinson's disease and normal aging.

    Science.gov (United States)

    Díaz-Santos, Mirella; Cao, Bo; Mauro, Samantha A; Yazdanbakhsh, Arash; Neargarder, Sandy; Cronin-Golomb, Alice

    2015-02-01

    Parkinson's disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions, including passive viewing and two volitional-control conditions (hold one percept in front; and switch: speed up the alternation between the two). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that differ from those seen in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity.

  15. Acute and Chronic Effect of Acoustic and Visual Cues on Gait Training in Parkinson’s Disease: A Randomized, Controlled Study

    Directory of Open Access Journals (Sweden)

    Roberto De Icco

    2015-01-01

    Full Text Available In this randomized controlled study we analyse and compare the acute and chronic effects of visual and acoustic cues on gait performance in Parkinson’s Disease (PD). We enrolled 46 patients with idiopathic PD who were assigned to 3 different modalities of gait training: (1) use of acoustic cues, (2) use of visual cues, or (3) overground training without cues. All patients were tested with kinematic analysis of gait at baseline (T0), at the end of the 4-week rehabilitation programme (T1), and 3 months later (T2). Regarding the acute effect, acoustic cues increased stride length and stride duration, while visual cues reduced the number of strides and normalized the stride/stance distribution but also reduced gait speed. As regards the chronic effect of cues, we recorded an improvement in some gait parameters in all 3 groups of patients: all 3 types of training improved gait speed; visual cues also normalized the stance/swing ratio, acoustic cues reduced the number of strides and increased stride length, and overground training improved stride length. The changes were not retained at T2 in any of the experimental groups. Our findings support and characterize the usefulness of cueing strategies in the rehabilitation of gait in PD.
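The gait parameters named in this abstract (stride length, cycle time, cadence, speed, and their variability) are simple functions of per-stride measurements. A minimal sketch, using invented values since the paper's kinematic pipeline is not given here:

```python
# Hedged sketch of common gait parameters from per-stride data.
# All measurement values are hypothetical.
from statistics import mean, stdev

stride_lengths_m = [1.02, 0.98, 1.05, 1.00, 0.97]   # hypothetical strides
cycle_times_s    = [1.10, 1.12, 1.08, 1.11, 1.09]   # one gait cycle per stride

cycle_time = mean(cycle_times_s)
stride_len = mean(stride_lengths_m)
cadence_steps_per_min = 2 * 60 / cycle_time          # two steps per stride
speed_m_per_s = stride_len / cycle_time
stride_len_cv = stdev(stride_lengths_m) / stride_len  # variability as a CV
```

A coefficient of variation (CV) is one conventional way to express "stride length variability"; the study may have used a different variability index.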

  16. Self-construal differences in neural responses to negative social cues.

    Science.gov (United States)

    Liddell, Belinda J; Felmingham, Kim L; Das, Pritha; Whitford, Thomas J; Malhi, Gin S; Battaglini, Eva; Bryant, Richard A

    2017-10-01

    Cultures differ substantially in representations of the self. Whereas individualistic cultural groups emphasize an independent self, reflected in processing biases towards centralized salient objects, collectivistic cultures are oriented towards an interdependent self, attending to contextual associations between visual cues. It is unknown how these perceptual biases may affect brain activity in response to negative social cues. Moreover, while some studies have shown that individual differences in self-construal moderate cultural group comparisons, few have examined self-construal differences separately from culture. To investigate these issues, a final sample of healthy participants high in trait levels of collectivistic self-construal (n=16) or individualistic self-construal (n=19), regardless of cultural background, completed a negative social cue evaluation task designed to engage face/object vs. context-specific neural processes whilst undergoing fMRI scanning. Between-group analyses revealed that the collectivistic group exclusively engaged the parahippocampal gyrus (parahippocampal place area), a region critical to contextual integration, during negative face processing, suggesting compensatory activations when contextual information was missing. The collectivist group also displayed enhanced negative context dependent brain activity involving the left superior occipital gyrus/cuneus and right anterior insula. By contrast, the individualistic group did not engage object or localized face processing regions as predicted, but rather demonstrated heightened appraisal and self-referential activations in medial prefrontal and temporoparietal regions to negative contexts, again suggesting compensatory processes when focal cues were absent. 
While individualists also appeared more sensitive to negative faces in the scenes, activating the right middle cingulate gyrus, dorsal prefrontal and parietal activations, this activity was observed relative to the

  17. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    Science.gov (United States)

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.

  18. Cue competition affects temporal dynamics of edge-assignment in human visual cortex.

    Science.gov (United States)

    Brooks, Joseph L; Palmer, Stephen E

    2011-03-01

    Edge-assignment determines the perception of relative depth across an edge and the shape of the closer side. Many cues determine edge-assignment, but relatively little is known about the neural mechanisms involved in combining these cues. Here, we manipulated extremal edge and attention cues to bias edge-assignment such that these two cues either cooperated or competed. To index their neural representations, we flickered figure and ground regions at different frequencies and measured the corresponding steady-state visual-evoked potentials (SSVEPs). Figural regions had stronger SSVEP responses than ground regions, independent of whether they were attended or unattended. In addition, competition and cooperation between the two edge-assignment cues significantly affected the temporal dynamics of edge-assignment processes. The figural SSVEP response peaked earlier when the cues causing it cooperated than when they competed, but sustained edge-assignment effects were equivalent for cooperating and competing cues, consistent with a winner-take-all outcome. These results provide physiological evidence that figure-ground organization involves competitive processes that can affect the latency of figural assignment.

  19. Cueing spatial attention through timing and probability.

    Science.gov (United States)

    Girardi, Giovanna; Antonucci, Gabriella; Nico, Daniele

    2013-01-01

    Even when focused on an effortful task we retain the ability to detect salient environmental information, and even irrelevant visual stimuli can be automatically detected. However, to what extent unattended information affects attentional control is not fully understood. Here we provide evidence of how the brain spontaneously organizes its cognitive resources by shifting attention between a selective-attending and a stimulus-driven modality within a single task. Using a spatial cueing paradigm we investigated the effect of cue-target asynchronies as a function of their probabilities of occurrence (i.e., relative frequency). Results show that this accessory information modulates attentional shifts. A valid spatial cue improved participants' performance as compared to an invalid one only in trials in which target onset was highly predictable because of its more robust occurrence. Conversely, cuing proved ineffective when spatial cue and target were associated according to a less frequent asynchrony. These patterns of response depended on asynchronies' probability and not on their duration. Our findings clearly demonstrate that through a fine decision-making, performed trial-by-trial, the brain utilizes implicit information to decide whether or not to voluntarily shift spatial attention. As if according to a cost-planning strategy, the cognitive effort of shifting attention depending on the cue is made only when the expected advantages are higher. In a trade-off competition for cognitive resources, voluntary/automatic attending may thus be a more complex process than expected. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. The language used in describing autobiographical memories prompted by life period visually presented verbal cues, event-specific visually presented verbal cues and short musical clips of popular music.

    Science.gov (United States)

    Zator, Krysten; Katz, Albert N

    2017-07-01

    Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by a general lifetime period word cue directing them to that period in their life. A third group heard 30-second-long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue and these typed memories were subjected to analysis by the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports produced indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime period word cues produced memories with reliably more uses of personal pronouns, past tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and how those differ when the cues emphasise different aspects of the encoded events.

  1. Forgotten but Not Gone: Retro-Cue Costs and Benefits in a Double-Cueing Paradigm Suggest Multiple States in Visual Short-Term Memory

    Science.gov (United States)

    van Moorselaar, Dirk; Olivers, Christian N. L.; Theeuwes, Jan; Lamme, Victor A. F.; Sligte, Ilja G.

    2015-01-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM…

  2. Combined Electrophysiological and Behavioral Evidence for the Suppression of Salient Distractors.

    Science.gov (United States)

    Gaspelin, Nicholas; Luck, Steven J

    2018-05-15

    Researchers have long debated how salient-but-irrelevant features guide visual attention. Pure stimulus-driven theories claim that salient stimuli automatically capture attention irrespective of goals, whereas pure goal-driven theories propose that an individual's attentional control settings determine whether salient stimuli capture attention. However, recent studies have suggested a hybrid model in which salient stimuli attract visual attention but can be actively suppressed by top-down attentional mechanisms. Support for this hybrid model has primarily come from ERP studies demonstrating that salient stimuli, which fail to capture attention, also elicit a distractor positivity (PD) component, a putative neural index of suppression. Other support comes from a handful of behavioral studies showing that processing at the salient locations is inhibited compared with other locations. The current study was designed to link the behavioral and neural evidence by combining ERP recordings with an experimental paradigm that provides a behavioral measure of suppression. We found that, when a salient distractor item elicited the PD component, processing at the location of this distractor was suppressed below baseline levels. Furthermore, the magnitude of behavioral suppression and the magnitude of the PD component covaried across participants. These findings provide a crucial connection between the behavioral and neural measures of suppression, which opens the door to using the PD component to assess the timing and neural substrates of the behaviorally observed suppression.

  3. Salient man-made structure detection in infrared images

    Science.gov (United States)

    Li, Dong-jie; Zhou, Fu-gen; Jin, Ting

    2013-09-01

    Target detection, segmentation, and recognition is a hot research topic in the field of image processing and pattern recognition, and salient area or object detection is one of the core technologies of precision-guided weapons. In this paper, we detect salient objects in a series of input infrared images using the classical feature integration theory and Itti's visual attention system. To find the salient object in an image accurately, we present a new method that solves the edge blur problem by calculating and using an edge mask. We also greatly improve computing speed by improving the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences through rows and columns separately. Experimental results show that our method detects salient objects accurately and rapidly.
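The separable row/column trick this abstract mentions can be illustrated with a toy center-surround operator. This is only a sketch of the general idea: real Itti-style systems use Gaussian pyramids across multiple scales, whereas here the "surround" is a simple 1-D mean filter applied to rows and then columns of a tiny hypothetical image:

```python
# Toy center-surround saliency: smooth rows, then columns (separable filter),
# and take the absolute difference from the original image. Illustrative only;
# not the paper's actual pyramid-based implementation.

def smooth_1d(v, r=1):
    """Mean filter of radius r with edge clamping."""
    n = len(v)
    return [sum(v[max(0, i - r):min(n, i + r + 1)]) /
            len(v[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def saliency(img, r=1):
    rows = [smooth_1d(row, r) for row in img]                  # smooth each row
    cols = [smooth_1d(c, r) for c in map(list, zip(*rows))]    # then each column
    surround = list(map(list, zip(*cols)))                     # transpose back
    return [[abs(c - s) for c, s in zip(ri, si)]               # |center - surround|
            for ri, si in zip(img, surround)]

img = [[0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0]]
sal = saliency(img)   # the isolated bright pixel gets the largest response
```

Because the mean filter is separable, the two 1-D passes cost O(r) per pixel each instead of O(r^2) for a full 2-D window, which is the speed advantage the abstract alludes to.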

  4. Distance-dependent pattern blending can camouflage salient aposematic signals.

    Science.gov (United States)

    Barnett, James B; Cuthill, Innes C; Scott-Samuel, Nicholas E

    2017-07-12

    The effect of viewing distance on the perception of visual texture is well known: spatial frequencies higher than the resolution limit of an observer's visual system will be summed and perceived as a single combined colour. In animal defensive colour patterns, distance-dependent pattern blending may allow aposematic patterns, salient at close range, to match the background to distant observers. Indeed, recent research has indicated that reducing the distance from which a salient signal can be detected can increase survival over camouflage or conspicuous aposematism alone. We investigated whether the spatial frequency of conspicuous and cryptically coloured stripes affects the rate of avian predation. Our results are consistent with pattern blending acting to camouflage salient aposematic signals effectively at a distance. Experiments into the relative rate of avian predation on edible model caterpillars found that increasing spatial frequency (thinner stripes) increased survival. Similarly, visual modelling of avian predators showed that pattern blending increased the similarity between caterpillar and background. These results show how a colour pattern can be tuned to reveal or conceal different information at different distances, and produce tangible survival benefits. © 2017 The Author(s).
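Distance-dependent blending can be modeled crudely by low-pass filtering a striped pattern: a wider filter stands in for a greater viewing distance, and higher-spatial-frequency (thinner) stripes lose contrast sooner. A minimal sketch under these assumptions (the filter and pattern are illustrative, not the authors' visual model):

```python
# Toy model of distance-dependent pattern blending: viewing from farther away
# is approximated by a wider mean filter; stripe contrast (max - min) collapses
# toward the mean color sooner for thinner stripes.

def stripes(period, length=60):
    """1-D square-wave pattern of 0/1 stripes, each period//2 samples wide."""
    return [1.0 if (i // (period // 2)) % 2 == 0 else 0.0 for i in range(length)]

def viewed_contrast(pattern, radius):
    """Contrast remaining after a mean filter of the given radius."""
    n = len(pattern)
    blurred = [sum(pattern[max(0, i - radius):min(n, i + radius + 1)]) /
               (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]
    return max(blurred) - min(blurred)

thin, thick = stripes(period=4), stripes(period=20)
# At radius 0 (close viewing) both patterns are fully visible; at the same
# simulated distance, the thin stripes retain less contrast than the thick ones.
```

This mirrors the reported survival result: thinner stripes blend into a single average color at shorter distances, concealing the signal from distant predators while leaving it salient up close.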

  5. The identification and modeling of visual cue usage in manual control task experiments

    Science.gov (United States)

    Sweet, Barbara Townsend

    Many fields of endeavor require humans to conduct manual control tasks while viewing a perspective scene. Manual control refers to tasks in which continuous, or nearly continuous, control adjustments are required. Examples include flying an aircraft, driving a car, and riding a bicycle. Perspective scenes can arise through natural viewing of the world, simulation of a scene (as in flight simulators), or through imaging devices (such as the cameras on an unmanned aerospace vehicle). Designers frequently have some degree of control over the content and characteristics of a perspective scene; airport designers can choose runway markings, vehicle designers can influence the size and shape of windows, as well as the location of the pilot, and simulator database designers can choose scene complexity and content. Little theoretical framework exists to help designers determine the answers to questions related to perspective scene content. An empirical approach is most commonly used to determine optimum perspective scene configurations. The goal of the research effort described in this dissertation has been to provide a tool for modeling the characteristics of human operators conducting manual control tasks with perspective-scene viewing. This is done for the purpose of providing an algorithmic, as opposed to empirical, method for analyzing the effects of changing perspective scene content for closed-loop manual control tasks. The dissertation contains the development of a model of manual control using a perspective scene, called the Visual Cue Control (VCC) Model. Two forms of model were developed: one model presumed that the operator obtained both position and velocity information from one visual cue, and the other model presumed that the operator used one visual cue for position, and another for velocity. The models were compared and validated in two experiments. 
The results show that the two-cue VCC model accurately characterizes the output of the human operator with a

  6. Effect of Performing a Boundary-Avoidance Tracking Task on the Perception of Coherence Between Visual and Inertial Cues

    NARCIS (Netherlands)

    Valente Pais, A.R.; Van Paassen, M.M.; Mulder, M.; Wentink, M.

    2011-01-01

    During flight simulation, the inertial and visual stimuli provided to the pilot differ considerably. For successful design of motion cueing algorithms it is necessary to gather knowledge on how pilots perceive the difference between visual and inertial cues. Some of the work done on this topic has

  7. Do cattle (Bos taurus) retain an association of a visual cue with a food reward for a year?

    Science.gov (United States)

    Hirata, Masahiko; Takeno, Nozomi

    2014-06-01

    Use of visual cues to locate specific food resources from a distance is a critical ability of animals foraging in a spatially heterogeneous environment. However, relatively little is known about how long animals can retain the learned cue-reward association without reinforcement. We compared feeding behavior of experienced and naive Japanese Black cows (Bos taurus) in discovering food locations in a pasture. Experienced animals had been trained to respond to a visual cue (plastic washtub) for a preferred food (grain-based concentrate) 1 year prior to the experiment, while naive animals had no exposure to the cue. Cows were tested individually in a test arena including tubs filled with the concentrate on three successive days (Days 1-3). Experienced cows located the first tub more quickly and visited more tubs than naive cows on Day 1 (usually P < 0.05). These results suggest that cattle can associate a visual cue with a food reward within a day and retain the association for 1 year despite a slight decay. © 2014 Japanese Society of Animal Science.

  8. Retrospective Cues Based on Object Features Improve Visual Working Memory Performance in Older Adults

    OpenAIRE

    Gilchrist, Amanda L.; Duarte, Audrey; Verhaeghen, Paul

    2015-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were either presented with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an u...

  9. All I saw was the cake. Hunger effects on attentional capture by visual food cues.

    Science.gov (United States)

    Piech, Richard M; Pastorino, Michael T; Zald, David H

    2010-06-01

    While effects of hunger on motivation and food reward value are well-established, far less is known about the effects of hunger on cognitive processes. Here, we deployed the emotional blink of attention paradigm to investigate the impact of visual food cues on attentional capture under conditions of hunger and satiety. Participants were asked to detect targets which appeared in a rapid visual stream after different types of task-irrelevant distractors. We observed that food stimuli acquired increased power to capture attention and prevent target detection when participants were hungry. This occurred despite monetary incentives to perform well. Our findings suggest an attentional mechanism through which hunger heightens perception of food cues. As an objective behavioral marker of attentional sensitivity to food cues, the emotional attentional blink paradigm may provide a useful technique for studying individual differences and state manipulations in sensitivity to food cues. Published by Elsevier Ltd.

  10. Atypical Visual Orienting to Gaze- and Arrow-Cues in Adults with High Functioning Autism

    Science.gov (United States)

    Vlamings, Petra H. J. M.; Stauder, Johannes E. A.; van Son, Ilona A. M.; Mottron, Laurent

    2005-01-01

    The present study investigates visual orienting to directional cues (arrow or eyes) in adults with high functioning autism (n = 19) and age matched controls (n = 19). A choice reaction time paradigm is used in which eye- or arrow-direction correctly (congruent) or incorrectly (incongruent) cues target location. In typically developing participants,…

  11. Forgotten but not gone: Retro-cue costs and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory.

    Science.gov (United States)

    van Moorselaar, Dirk; Olivers, Christian N L; Theeuwes, Jan; Lamme, Victor A F; Sligte, Ilja G

    2015-11-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM when made relevant again by a subsequent second cue. We presented either 1 or 2 consecutive retro-cues (80% valid) during the retention interval of a change-detection task. Relative to no cue, a valid cue increased VSTM capacity by 2 items, while an invalid cue decreased capacity by 2. Importantly, when a second, valid cue followed an invalid cue, capacity regained 2 items, so that performance was back on par. In addition, when the second cue was also invalid, there was no extra loss of information from VSTM, suggesting that those items that survived a first invalid cue automatically also survived a second. We conclude that these results are in support of a very versatile VSTM system, in which memoranda adopt different representational states depending on whether they are deemed relevant now, in the future, or not at all. We discuss a neural model that is consistent with this conclusion. (c) 2015 APA, all rights reserved.
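Capacity in change-detection tasks like this one is commonly estimated with Cowan's K = set size × (hit rate − false alarm rate) (Cowan, 2001). The abstract does not state the exact estimator used, so the following is only an illustration of how a "±2 item" capacity shift could be quantified, with invented hit and false-alarm rates:

```python
# Hedged sketch: Cowan's K capacity estimate for change detection.
# The rates below are hypothetical; only the 2-item difference between
# conditions is taken from the abstract.

def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in visual short-term memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical rates for an 8-item array, valid-cue vs. no-cue conditions:
k_valid = cowans_k(8, hit_rate=0.80, false_alarm_rate=0.15)
k_none  = cowans_k(8, hit_rate=0.65, false_alarm_rate=0.25)
# k_valid - k_none corresponds to the 2-item retro-cue benefit.
```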

  12. Awareness in contextual cueing of visual search as measured with concurrent access- and phenomenal-consciousness tasks.

    Science.gov (United States)

    Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2012-10-25

    In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.

  13. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on Freezing of Gait in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Sabine Janssen

    2017-06-01

    Full Text Available External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson’s disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested to improve the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head and to reduce their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual

  14. Neural response to visual sexual cues in dopamine treatment-linked hypersexuality in Parkinson's disease.

    Science.gov (United States)

    Politis, Marios; Loane, Clare; Wu, Kit; O'Sullivan, Sean S; Woodhead, Zoe; Kiferle, Lorenzo; Lawrence, Andrew D; Lees, Andrew J; Piccini, Paola

    2013-02-01

    Hypersexuality with compulsive sexual behaviour is a significant source of morbidity for patients with Parkinson's disease receiving dopamine replacement therapies. We know relatively little about the pathophysiology of hypersexuality in Parkinson's disease, and it is unknown how visual sexual stimuli, similar to the portrayals of sexuality in the mainstream mass media may affect the brain and behaviour in such susceptible individuals. Here, we have studied a group of 12 patients with Parkinson's disease with hypersexuality using a functional magnetic resonance imaging block design exposing participants to both sexual, other reward-related and neutral visual cues. We hypothesized that exposure to visual sexual cues would trigger increased sexual desire in patients with Parkinson's disease with hypersexuality that would correspond to changes in brain activity in regions linked to dopaminergically stimulated sexual motivation. Patients with Parkinson's disease with hypersexuality were scanned ON and OFF dopamine drugs, and their results were compared with a group of 12 Parkinson's disease control patients without hypersexuality or other impulse control disorders. Exposure to sexual cues significantly increased sexual desire and hedonic responses in the Parkinson's disease hypersexuality group compared with the Parkinson's disease control patients. These behavioural changes corresponded to significant blood oxygen level-dependent signal changes in regions within limbic, paralimbic, temporal, occipital, somatosensory and prefrontal cortices that correspond to emotional, cognitive, autonomic, visual and motivational processes. The functional imaging data showed that the hypersexuality patients' increased sexual desire correlated with enhanced activations in the ventral striatum, and cingulate and orbitofrontal cortices. When the patients with Parkinson's disease with hypersexuality were OFF medication, the functional imaging data showed decreases in activation during

  15. Head-body ratio as a visual cue for stature in people and sculptural art

    OpenAIRE

    Mather, George

    2010-01-01

    Body size is crucial for determining the outcome of competition for resources and mates. Many species use acoustic cues to measure caller body size. Vision is the pre-eminent sense for humans, but visual depth cues are of limited utility in judgments of absolute body size. The reliability of internal body proportion as a potential cue to stature was assessed with a large sample of anthropometric data, and the ratio of head height to body height (HBR) was found to be highly correlated with sta...
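
    The ratio described above is straightforward to compute. Below is a minimal sketch (illustrative only, not code from the paper; the function name and measurement values are hypothetical):

    ```python
    # Head-to-body ratio (HBR) as a simple internal-proportion cue to stature.
    # Illustrative sketch only; the measurements below are hypothetical.

    def head_body_ratio(head_height_cm: float, stature_cm: float) -> float:
        """Ratio of head height to total body height (stature)."""
        return head_height_cm / stature_cm

    # Adults are classically depicted at roughly 7.5 head heights tall,
    # so adult HBR values cluster near 1/7.5, about 0.133.
    print(round(head_body_ratio(23.0, 172.5), 3))  # 0.133
    ```

    The abstract reports that HBR tracks stature closely in anthropometric data, so a viewer could in principle map an observed body proportion back onto an estimate of absolute height, which external depth cues alone cannot supply.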

  16. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  17. Spatially valid proprioceptive cues improve the detection of a visual stimulus

    DEFF Research Database (Denmark)

    Jackson, Carl P T; Miall, R Chris; Balslev, Daniela

    2010-01-01

    , which has been demonstrated for other modality pairings. The aim of this study was to test whether proprioceptive signals can spatially cue a visual target to improve its detection. Participants were instructed to use a planar manipulandum in a forward reaching action and determine during this movement...

  18. Haven't a Cue? Mapping the CUE Space as an Aid to HRA Modeling

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; Ronald L Boring; Jacques Hugo; William Phoenix

    2012-06-01

    Advances in automation present a new modeling environment for the human reliability analysis (HRA) practitioner. Many, if not most, current-day HRA methods have their origin in characterizing and quantifying human performance in analog environments where mode awareness and system status indications are potentially less comprehensive, but simpler to comprehend at a glance, than in advanced presentation systems. The introduction of highly complex automation has the potential to lead to decreased levels of situation awareness caused by the need for increased monitoring; confusion regarding the often non-obvious causes of automation failures; and emergent system dependencies that formerly may have been uncharacterized. Understanding the relationship between the incoming cues available to operators during plant upset conditions and the operating procedures yields insight into the nature of the expected operator response in this control room environment. Static systems-analysis methods such as fault trees do not contain the appropriate temporal information, nor do they necessarily specify the relationships among cues leading to operator response. In this paper, we do not attempt to replace the standard performance shaping factors commonly used in HRA, nor to offer a new HRA method; existing methods may suffice. Rather, we strive to enhance current understanding of the basis for operator response through a technique that can be used during the qualitative portion of the HRA analysis process. The CUE map is a means to visualize the relationships among the salient control room cues that influence operator response and to show how the operator's cognitive map changes as information is gained or lost; it is applicable to existing plants as well as to advanced hybrid plants and small modular reactor designs. A brief application involving loss of condensate is presented, and the advantages and limitations of the modeling approach and of the CUE map are discussed.

  19. How task demands shape brain responses to visual food cues.

    Science.gov (United States)

    Pohl, Tanja Maria; Tempelmann, Claus; Noesselt, Toemme

    2017-06-01

    Several previous imaging studies have aimed at identifying the neural basis of visual food cue processing in humans. However, there is little consistency of the functional magnetic resonance imaging (fMRI) results across studies. Here, we tested the hypothesis that this variability across studies might - at least in part - be caused by the different tasks employed. In particular, we assessed directly the influence of task set on brain responses to food stimuli with fMRI using two tasks (colour vs. edibility judgement, between-subjects design). When participants judged colour, the left insula, the left inferior parietal lobule, occipital areas, the left orbitofrontal cortex and other frontal areas expressed enhanced fMRI responses to food relative to non-food pictures. However, when judging edibility, enhanced fMRI responses to food pictures were observed in the superior and middle frontal gyrus and in medial frontal areas including the pregenual anterior cingulate cortex and ventromedial prefrontal cortex. This pattern of results indicates that task sets can significantly alter the neural underpinnings of food cue processing. We propose that judging low-level visual stimulus characteristics - such as colour - triggers stimulus-related representations in the visual and even in gustatory cortex (insula), whereas discriminating abstract stimulus categories activates higher order representations in both the anterior cingulate and prefrontal cortex. Hum Brain Mapp 38:2897-2912, 2017. © 2017 Wiley Periodicals, Inc.

  20. The Impact of Visual Cues and Service Behavior on the Consumer Retail Experience

    OpenAIRE

    Bjerk, Taylor

    2015-01-01

    With product differentiation low in the retail industry, businesses need to create strong brand images and increase customer loyalty in order to remain competitive. Visual merchandising is one tool that businesses have to communicate their message in a compelling and strategic manner. Within the scope of visual merchandising there are a number of atmospherics, or cues, which include visual, tactile, and auditory, that can be used in conjunction with one another to influence consumer behavior....

  1. Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.

    Science.gov (United States)

    Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco

    2011-09-20

    Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues with each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This was so independent of the hitter type and when performance feedback to the participants was either available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.

  2. A review of visual cues associated with food on food acceptance and consumption.

    Science.gov (United States)

    Wadhera, Devina; Capaldi-Phillips, Elizabeth D

    2014-01-01

    Several sensory cues affect food intake including appearance, taste, odor, texture, temperature, and flavor. Although taste is an important factor regulating food intake, in most cases, the first sensory contact with food is through the eyes. Few studies have examined the effects of the appearance of a food portion on food acceptance and consumption. The purpose of this review is to identify the various visual factors associated with food such as proximity, visibility, color, variety, portion size, height, shape, number, volume, and the surface area and their effects on food acceptance and consumption. We suggest some ways that visual cues can be used to increase fruit and vegetable intake in children and decrease excessive food intake in adults. In addition, we discuss the need for future studies that can further establish the relationship between several unexplored visual dimensions of food (specifically shape, number, size, and surface area) and food intake. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  3. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    Science.gov (United States)

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing, specifically in relation to attention, was examined in two experiments. Adult and 10-year-old participants completed a contextual cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.
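
    The paradigm above rests on a simple dependent measure. As a hedged illustration (not the authors' analysis code; the function name and reaction times are invented), the contextual cueing effect is conventionally quantified as the reaction-time advantage for repeated over novel displays:

    ```python
    # Illustrative sketch of the standard contextual cueing measure:
    # mean RT on novel displays minus mean RT on repeated displays.
    # All reaction times below are hypothetical.

    from statistics import mean

    def cueing_effect(rts_novel, rts_repeated):
        """Contextual cueing effect in ms: positive = faster on repeated displays."""
        return mean(rts_novel) - mean(rts_repeated)

    # Hypothetical per-trial RTs (ms) late in training:
    novel = [812, 790, 845, 801]
    repeated = [748, 760, 731, 755]
    print(cueing_effect(novel, repeated))  # 63.5
    ```

    A positive difference indicates that the repeated spatial contexts have been (implicitly) learned; the null learning effect reported for the 10-year-olds corresponds to this difference not departing reliably from zero.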

  4. Technology-Assisted Rehabilitation of Writing Skills in Parkinson’s Disease: Visual Cueing versus Intelligent Feedback

    Directory of Open Access Journals (Sweden)

    Evelien Nackaerts

    2017-01-01

    Recent research showed that visual cueing can have both beneficial and detrimental effects on the handwriting of patients with Parkinson’s disease (PD) and healthy controls, depending on the circumstances. Hence, using other sensory modalities to deliver cueing or feedback may be a valuable alternative. Therefore, the current study compared the effects of short-term training with either continuous visual cues or intermittent intelligent verbal feedback. Ten PD patients and nine healthy controls were randomly assigned to one of these training modes. To assess transfer of learning, writing performance was assessed in the absence of cueing and feedback on both trained and untrained writing sequences. The feedback pen and a touch-sensitive writing tablet were used for testing. Both training types resulted in improved writing amplitudes for the trained and untrained sequences. In conclusion, these results suggest that the feedback pen is a valuable tool to implement writing training in a tailor-made fashion for people with PD. Future studies should include larger sample sizes and different subgroups of PD for long-term training with the feedback pen.

  5. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas and thereby facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver out of an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  7. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    Science.gov (United States)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  8. Response of hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo to visual and chemical cues arising from prey.

    Science.gov (United States)

    Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M

    2009-01-01

    Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving, prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.

  9. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine...

  10. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    Science.gov (United States)

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
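
    The τ variable in the abstract has a compact geometric form: for an approaching object, τ is the instantaneous optical angle divided by its rate of change, and it approximates the time to contact. A minimal numerical sketch (illustrative only; the function names and scenario values are our own, not the study's):

    ```python
    # First-order time-to-contact estimate via tau: TTC ~ theta / (d theta / dt),
    # where theta is the object's instantaneous optical (visual) angle.
    # Illustrative sketch; scenario values are hypothetical.

    import math

    def optical_angle(size_m: float, distance_m: float) -> float:
        """Visual angle (radians) subtended by an object of a given size."""
        return 2.0 * math.atan(size_m / (2.0 * distance_m))

    def tau_estimate(size_m: float, distance_m: float, speed_mps: float,
                     dt: float = 0.01) -> float:
        """TTC estimate (s) from tau, using a finite-difference rate of change."""
        theta_now = optical_angle(size_m, distance_m)
        theta_next = optical_angle(size_m, distance_m - speed_mps * dt)
        dtheta_dt = (theta_next - theta_now) / dt
        return theta_now / dtheta_dt

    # A car 1.8 m wide, 50 m away, approaching at 10 m/s: true TTC is 5 s,
    # and the tau estimate is close to it at this distance.
    print(round(tau_estimate(1.8, 50.0, 10.0), 2))
    ```

    By contrast, the heuristic cues named in the abstract (final optical size, final sound pressure level) ignore the rate-of-change term entirely, which is why they can bias TTC judgments when object size or loudness varies.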

  11. Generating physical symptoms from visual cues: An experimental study

    OpenAIRE

    Ogden, J; Zoukas, S

    2010-01-01

    This experimental study explored whether the physical symptoms of cold, pain and itchiness could be generated by visual cues, whether they varied in the ease with which they could be generated and whether they were related to negative affect. Participants were randomly allocated by group to watch one of three videos relating to cold (e.g. ice, snow, wind), pain (e.g. sporting injuries, tattoos) or itchiness (e.g. head lice, scratching). They then rated their self-reported symptoms of cold, pa...

  12. Sensory modality of smoking cues modulates neural cue reactivity.

    Science.gov (United States)

    Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J

    2013-01-01

    Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.

  13. Heads First: Visual Aftereffects Reveal Hierarchical Integration of Cues to Social Attention.

    Directory of Open Access Journals (Sweden)

    Sarah Cooney

    Determining where another person is attending is an important skill for social interaction that relies on various visual cues, including the turning direction of the head and body. This study reports a novel high-level visual aftereffect that addresses the important question of how these sources of information are combined in gauging social attention. We show that adapting to images of heads turned 25° to the right or left produces a perceptual bias in judging the turning direction of subsequently presented bodies. In contrast, little to no change in the judgment of head orientation occurs after adapting to extremely oriented bodies. The unidirectional nature of the aftereffect suggests that cues from the human body signaling social attention are combined in a hierarchical fashion and is consistent with evidence from single-cell recording studies in nonhuman primates showing that information about head orientation can override information about body posture when both are visible.

  14. Visual Search and Target Cueing: A Comparison of Head-Mounted Versus Hand-Held Displays on the Allocation of Visual Attention

    National Research Council Canada - National Science Library

    Yeh, Michelle; Wickens, Christopher D

    1998-01-01

    We conducted a study to examine the effects of target cueing and conformality with a hand-held or head-mounted display to determine their effects on visual search tasks requiring focused and divided attention...

  15. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

    In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  16. The impact of distracter-target similarity on contextual cueing effects of children and adults.

    Science.gov (United States)

    Yang, Yingying; Merrill, Edward C

    2014-05-01

    Contextual cueing reflects a memory-based attentional guidance process that develops through repeated exposure to displays in which a target location has been consistently paired with a specific context. In two experiments, we compared 20 younger children's (6-7 years old), 20 older children's (9-10 years old), and 20 young adults' (18-21 years old) abilities to acquire contextual cueing effects from displays in which half of the distracters predicted the location of the target and half did not. Across experiments, we varied the similarity between the predictive and nonpredictive distracters and the target. In Experiment 1, the predictive distracters were visually similar to the target and dissimilar from the nonpredictive distracters. In Experiment 2, the nonpredictive distracters were also similar to the target and predictive distracters. All three age groups exhibited contextual cueing in Experiment 1, although the effect was not as strong for the younger children relative to older children and adults. All participants exhibited weaker contextual cueing effects in Experiment 2, with the younger children not exhibiting significant contextual cueing at all. Apparently, when search processes could not be guided to the predictive distracters on the basis of salient stimulus features, younger children in particular experienced difficulty in implicitly identifying and using aspects of the context to facilitate the acquisition of contextual cueing effects. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    Science.gov (United States)

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  18. Visual perception of dynamic properties: cue heuristics versus direct-perceptual competence.

    Science.gov (United States)

    Runeson, S; Juslin, P; Olsson, H

    2000-07-01

    Constructivist and Gibsonian approaches disagree over the possibility of direct perceptual use of advanced information. A trenchant instance concerns visual perception of underlying dynamic properties as specified by kinematic patterns of events. For the paradigmatic task of discrimination of relative mass in observed collisions, 2 mathematical models are developed, 1 model representing a direct, invariant-based approach, and 1 representing a cue-heuristic approach. The models enable a critical experimental design with distinct predictions concerning performance data and confidence ratings. Although pretraining results were mixed, the invariant-based model was empirically confirmed after a minimal amount of training: Competence entails the use of advanced kinematic information in a direct-perceptual ("sensory") mode of apprehension, in contrast to beginners' use of simpler cues in an inferential ("cognitive") mode.

  19. Ageing diminishes the modulation of human brain responses to visual food cues by meal ingestion.

    Science.gov (United States)

    Cheah, Y S; Lee, S; Ashoor, G; Nathan, Y; Reed, L J; Zelaya, F O; Brammer, M J; Amiel, S A

    2014-09-01

    Rates of obesity are greatest in middle age. Obesity is associated with altered activity of brain networks sensing food-related stimuli and internal signals of energy balance, which modulate eating behaviour. The impact of healthy mid-life ageing on these processes has not been characterised. We therefore aimed to investigate changes in brain responses to food cues, and the modulatory effect of meal ingestion on such evoked neural activity, from young adulthood to middle age. Twenty-four healthy, right-handed subjects, aged 19.5-52.6 years, were studied on separate days after an overnight fast, randomly receiving 50 ml water or 554 kcal mixed meal before functional brain magnetic resonance imaging while viewing visual food cues. Across the group, meal ingestion reduced food cue-evoked activity of amygdala, putamen, insula and thalamus, and increased activity in precuneus and bilateral parietal cortex. Corrected for body mass index, ageing was associated with decreasing food cue-evoked activation of right dorsolateral prefrontal cortex (DLPFC) and precuneus, and increasing activation of left ventrolateral prefrontal cortex (VLPFC), bilateral temporal lobe and posterior cingulate in the fasted state. Ageing was also positively associated with the difference in food cue-evoked activation between fed and fasted states in the right DLPFC, bilateral amygdala and striatum, and negatively associated with that of the left orbitofrontal cortex and VLPFC, superior frontal gyrus, left middle and temporal gyri, posterior cingulate and precuneus. There was an overall tendency towards decreasing modulatory effects of prior meal ingestion on food cue-evoked regional brain activity with increasing age. Healthy ageing to middle age is associated with diminishing sensitivity to meal ingestion of visual food cue-evoked activity in brain regions that represent the salience of food and direct food-associated behaviour. Reduced satiety sensing may have a role in the greater risk of

  20. Visual-gustatory interaction: orbitofrontal and insular cortices mediate the effect of high-calorie visual food cues on taste pleasantness.

    Science.gov (United States)

    Ohla, Kathrin; Toepel, Ulrike; le Coutre, Johannes; Hudry, Julie

    2012-01-01

    Vision provides a primary sensory input for food perception. It raises expectations about taste and nutritional value and drives acceptance or rejection. So far, the impact of visual food cues varying in energy content on subsequent taste integration has remained unexplored. Using electrical neuroimaging, we assessed whether high- and low-calorie food cues differentially influence the brain processing and perception of a subsequent neutral electric taste. When viewing high-calorie food images, participants reported the subsequent taste to be more pleasant than when low-calorie food images preceded the identical taste. Moreover, the taste-evoked neural activity was stronger in the bilateral insula and the adjacent frontal operculum (FOP) within 100 ms after taste onset when preceded by high- versus low-calorie cues. A similar pattern evolved in the anterior cingulate (ACC) and medial orbitofrontal cortex (OFC) around 180 ms, as well as in the right insula around 360 ms. The activation differences in the OFC correlated positively with changes in taste pleasantness, a finding in accord with the role of the OFC in the hedonic evaluation of taste. Later activation differences in the right insula likely indicate revaluation of interoceptive taste awareness. Our findings reveal previously unknown mechanisms of cross-modal, visual-gustatory, sensory interactions underlying food evaluation.

  1. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    OpenAIRE

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the s...

  2. Object-centered representations support flexible exogenous visual attention across translation and reflection.

    Science.gov (United States)

    Lin, Zhicheng

    2013-11-01

    Visual attention can be deployed to stimuli based on our willful, top-down goal (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving-frame configuration, we presented two frames in succession, forming either apparent translational motion or mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Even though the cue is presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue than when it appears at other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror image confusion. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Line-up member similarity influences the effectiveness of a salient rejection option for eyewitnesses

    OpenAIRE

    Bruer, Kaila C.; Fitzgerald, Ryan J.; Therrien, Natalie M.; Price, Heather L.

    2015-01-01

    Visually salient line-up rejection options have not been systematically studied with adult eyewitnesses. We explored the impact of using a non-verbal, salient rejection option on adults' identification accuracy for line-ups containing low- or high-similarity fillers. The non-verbal, salient rejection option had minimal impact on accuracy in low-similarity line-ups, but in high-similarity line-ups its inclusion increased correct rejections for target-absent line-ups as well as incorrect reject...

  4. The Effect of Visual Cueing and Control Design on Children's Reading Achievement of Audio E-Books with Tablet Computers

    Science.gov (United States)

    Wang, Pei-Yu; Huang, Chung-Kai

    2015-01-01

    This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…

  5. Visual and Auditory Cue Effects on Risk Assessment in a Highway Training Simulation

    NARCIS (Netherlands)

    Toet, A.; Houtkamp, J.M.; Meulen, van der R.

    2013-01-01

    We investigated whether manipulation of visual and auditory depth and speed cues can affect a user’s sense of risk for a low-cost nonimmersive virtual environment (VE) representing a highway environment with traffic incidents. The VE is currently used in an examination program to assess procedural

  6. Visual and auditory cue effects on risk assessment in a highway training simulation

    NARCIS (Netherlands)

    Toet, A.; Houtkamp, J.M.; Meulen, R. van der

    2013-01-01

    We investigated whether manipulation of visual and auditory depth and speed cues can affect a user’s sense of risk for a low-cost nonimmersive virtual environment (VE) representing a highway environment with traffic incidents. The VE is currently used in an examination program to assess procedural

  7. The role of haptic versus visual volume cues in the size-weight illusion.

    Science.gov (United States)

    Ellis, R R; Lederman, S J

    1993-03-01

    Three experiments establish the size-weight illusion as a primarily haptic phenomenon, despite its having been more traditionally considered an example of vision influencing haptic processing. Experiment 1 documents, across a broad range of stimulus weights and volumes, the existence of a purely haptic size-weight illusion, equal in strength to the traditional illusion. Experiment 2 demonstrates that haptic volume cues are both sufficient and necessary for a full-strength illusion. In contrast, visual volume cues are merely sufficient, and produce a relatively weaker effect. Experiment 3 establishes that congenitally blind subjects experience an effect as powerful as that of blindfolded sighted observers, thus demonstrating that visual imagery is also unnecessary for a robust size-weight illusion. The results are discussed in terms of their implications for both sensory and cognitive theories of the size-weight illusion. Applications of this work to human factors design and to sensor-based systems for robotic manipulation are also briefly considered.

  8. Designing and Evaluation of Reliability and Validity of Visual Cue-Induced Craving Assessment Task for Methamphetamine Smokers

    Directory of Open Access Journals (Sweden)

    Hamed Ekhtiari

    2010-08-01

    Full Text Available ABSTRACT Introduction: Craving for methamphetamine is a significant health concern, and exposure to methamphetamine cues in the laboratory can induce craving. In this study, a task-design procedure for evaluating methamphetamine cue-induced craving under laboratory conditions is examined. Methods: First, a series of visual cues that could induce craving was identified in 5 discussion sessions between expert clinicians and 10 methamphetamine smokers. Cues were categorized into 4 main clusters and photos were taken for each cue in a studio; the 60 most evocative photos were then selected and 10 neutral photos were added. In this phase, 50 subjects with methamphetamine dependence were exposed to the cues and rated the craving intensity induced by the 72 cues (60 active evocative photos + 10 neutral photos) on a self-report Visual Analogue Scale (ranging from 0-100). In this way, 50 photos with high evocative potency (CICT 50) and the 10 photos with the most evocative potency (CICT 10) were obtained, and the task was designed accordingly. Results: Task reliability (internal consistency) was measured by Cronbach's alpha, which was 91% for the CICT 50 and 71% for the CICT 10. The greatest cue-induced craving was reported for the category "drug use procedure" (66.27±30.32) and the least for the category "cues associated with drug use" (31.38±32.96). Differences in cue-induced craving on the CICT 50 and CICT 10 were not associated with age, education, income, marital status, employment, or sexual activity in the 30 days prior to study entry. Family living condition was marginally correlated with higher CICT 50 scores. Age of onset for opioids, cocaine, and methamphetamine was negatively correlated with the CICT 50 and CICT 10, and age of first opiate use was negatively correlated with the CICT 50. Discussion: Cue-induced craving for methamphetamine may be reliably measured by tasks designed in the laboratory, and the designed assessment tasks can be used in the cue-reactivity paradigm, and
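
    The reliability figures quoted above (Cronbach's alpha of 91% and 71%) follow the standard internal-consistency formula α = k/(k−1) · (1 − Σσ²_item / σ²_total). As a hedged illustration only — the function and rating data below are invented for demonstration, not taken from the study — a minimal NumPy sketch:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_subjects, n_items) matrix of craving ratings."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of items (cue photos)
    item_vars = ratings.var(axis=0, ddof=1)       # per-item variance across subjects
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 subjects rating 4 cue photos on a 0-100 visual analogue scale
demo = [[80, 75, 90, 60],
        [20, 25, 30, 15],
        [55, 60, 50, 45],
        [70, 65, 80, 55],
        [35, 40, 30, 25]]
alpha = cronbach_alpha(demo)
```

    Because the five toy raters order the four photos consistently, alpha comes out high; uncorrelated ratings would drive it toward zero.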

  9. Neural responses to visual food cues according to weight status: a systematic review of functional magnetic resonance imaging studies.

    Science.gov (United States)

    Pursey, Kirrilly M; Stanwell, Peter; Callister, Robert J; Brain, Katherine; Collins, Clare E; Burrows, Tracy L

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36); however, justification for image selection was provided in only 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies.

  10. Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality.

    Science.gov (United States)

    Pritchard, Stephen C; Zopf, Regine; Polito, Vince; Kaplan, David M; Williams, Mark A

    2016-01-01

    The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual-tactile synchrony, and visual-proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation, yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings.

  11. Express attentional re-engagement but delayed entry into consciousness following invalid spatial cues in visual search.

    Directory of Open Access Journals (Sweden)

    Benoit Brisson

    Full Text Available BACKGROUND: In predictive spatial cueing studies, reaction times (RT) are shorter for targets appearing at cued locations (valid trials) than at other locations (invalid trials). An increase in the amplitude of early P1 and/or N1 event-related potential (ERP) components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects. METHODOLOGY/PRINCIPAL FINDINGS: Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN), to evaluate whether visual selection (as indexed by the N2pc) and visual short-term memory processes (as indexed by the SPCN) are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated with the RT effect. CONCLUSIONS/SIGNIFICANCE: Results show that latency differences that could explain the RT cueing effects must occur after the visual selection processes giving rise to the N2pc, but at or before transfer into visual short-term memory, as reflected by the SPCN, at least in discrimination tasks in which the target is presented concurrently with at least one distractor. Given that the SPCN has previously been associated with conscious report, these results further show that entry into consciousness is delayed following invalid cues.

  12. The Role of Inhibition in Avoiding Distraction by Salient Stimuli.

    Science.gov (United States)

    Gaspelin, Nicholas; Luck, Steven J

    2018-01-01

    Researchers have long debated whether salient stimuli can involuntarily 'capture' visual attention. We review here evidence for a recently discovered inhibitory mechanism that may help to resolve this debate. This evidence suggests that salient stimuli naturally attempt to capture attention, but capture can be avoided if the salient stimulus is suppressed before it captures attention. Importantly, the suppression process can be more or less effective as a result of changing task demands or lapses in cognitive control. Converging evidence for the existence of this suppression mechanism comes from multiple sources, including psychophysics, eye-tracking, and event-related potentials (ERPs). We conclude that the evidence for suppression is strong, but future research will need to explore the nature and limits of this mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The effect of pistachio shells as a visual cue in reducing caloric consumption.

    Science.gov (United States)

    Kennedy-Hagan, K; Painter, J E; Honselman, C; Halvorson, A; Rhodes, K; Skwir, K

    2011-10-01

    It was hypothesized that pistachio shells left in sight as visual cues of consumption would cause individuals to consume less. A convenience sample of faculty and staff at a mid-western university (n=118) was recruited for the study. The subjects were told they were going to evaluate a variety of brands of pistachios and were surveyed at the end of each day to determine their fullness and satisfaction. The subjects were offered pistachios on their desks for an 8-h period on two separate days and were able to consume the pistachios at their leisure during that time. Subjects began each day with a sixteen-ounce bowl filled with four ounces of pistachios in the shell. They were also provided with a second sixteen-ounce bowl, in which they were instructed to place the empty shells from the pistachios they consumed. Every 2 h throughout the day, pistachios were added in two-ounce increments. In condition one, the shells remained in the bowls until the end of the day, whereas in condition two, the shell bowls were emptied every 2 h throughout the day. In condition one, subjects consumed an average of 216 calories. In condition two, subjects consumed an average of 264 calories, a difference of 48 calories. Subjects in condition one consumed significantly (p≤.05) fewer calories, yet fullness and satisfaction ratings were not significantly (p≥.05) different between conditions. Leaving pistachio shells as a visual cue to consumption may help consumers consume fewer calories. Individuals will be made aware of the impact of visual cues of dietary intake on total food consumption. Published by Elsevier Ltd.
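
    The within-subject calorie contrast reported above (216 vs. 264 kcal across the two shell-bowl conditions) is the kind of difference a paired t statistic tests. A minimal standard-library sketch, using hypothetical per-subject intake values rather than the study's raw data:

```python
import math
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    """Paired t statistic: mean within-subject difference over its standard error."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical per-subject calorie intake (shells left visible vs. shells removed)
shells_visible = [200, 230, 180, 250, 220]
shells_removed = [250, 270, 230, 290, 280]
t_stat = paired_t(shells_visible, shells_removed)  # negative: less eaten with shells in sight
```

    A large-magnitude t on n−1 degrees of freedom would correspond to the significant (p≤.05) intake difference the abstract reports.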

  14. How do visual and postural cues combine for self-tilt perception during slow pitch rotations?

    Science.gov (United States)

    Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L

    2014-11-01

    Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Psychogenic and neural visual-cue response in PD dopamine dysregulation syndrome.

    Science.gov (United States)

    Loane, Clare; Wu, Kit; O'Sullivan, Sean S; Lawrence, Andrew D; Woodhead, Zoe; Lees, Andrew J; Piccini, Paola; Politis, Marios

    2015-11-01

    Dopamine dysregulation syndrome (DDS) in Parkinson's disease (PD) refers to the compulsive use of dopaminergic replacement therapy and has serious psycho-social consequences. The mechanisms underlying DDS are not clear, although the syndrome has been linked to dysfunctional brain reward networks. Using fMRI, we investigated behavioral and neural responses to drug cues in six PD DDS patients and 12 PD control patients in both the ON and OFF medication states. Behavioral measures of liking, wanting, and subjectively 'feeling ON medication' were also collected. Behaviorally, PD DDS patients feel less ON and want their drugs more at baseline compared to PD controls. Following drug-cue exposure, PD DDS patients feel significantly more ON medication, which correlates with significantly increased activity in reward-related regions. The results demonstrate that exposure to drug cues increases the subjective feeling of being 'ON' medication, which corresponds to dysfunctional activation in reward-related regions in PD DDS patients. These findings should be extended in future studies. That visual stimuli are sufficient to elicit a behavioral response through neuroadaptations could have direct implications for the management of addictive behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. The Impact of Salient Advertisements on Reading and Attention on Web Pages

    Science.gov (United States)

    Simola, Jaana; Kuisma, Jarmo; Oorni, Anssi; Uusitalo, Liisa; Hyona, Jukka

    2011-01-01

    Human vision is sensitive to salient features such as motion. Therefore, animation and onset of advertisements on Websites may attract visual attention and disrupt reading. We conducted three eye tracking experiments with authentic Web pages to assess whether (a) ads are efficiently ignored, (b) ads attract overt visual attention and disrupt…

  17. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    Science.gov (United States)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information, with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  18. What You Don't Notice Can Harm You: Age-Related Differences in Detecting Concurrent Visual, Auditory, and Tactile Cues.

    Science.gov (United States)

    Pitts, Brandon J; Sarter, Nadine

    2018-06-01

    Objective This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.
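
    Detection rates and false reports like those described above are commonly summarized with the signal-detection sensitivity index d′ = z(hit rate) − z(false-alarm rate). A brief standard-library sketch with hypothetical rates (the study's actual values are not reproduced here):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity d': z-transformed hit rate minus z-transformed false-alarm rate."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for detecting a tactile cue presented inside a cue triplet
older_adults = d_prime(0.70, 0.10)    # more misses and more false reports
younger_adults = d_prime(0.95, 0.05)  # higher sensitivity
```

    In practice, hit or false-alarm rates of exactly 0 or 1 are adjusted (e.g., with a log-linear correction) before the z-transform, since the inverse CDF is undefined at those values.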

  19. A novel role for visual perspective cues in the neural computation of depth.

    Science.gov (United States)

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  20. Action Speaks Louder than Words: Young Children Differentially Weight Perceptual, Social, and Linguistic Cues to Learn Verbs

    Science.gov (United States)

    Brandone, Amanda C.; Pence, Khara L.; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy

    2007-01-01

    This paper explores how children use two possible solutions to the verb-mapping problem: attention to perceptually salient actions and attention to social and linguistic information (speaker cues). Twenty-two-month-olds attached a verb to one of two actions when perceptual cues (presence/absence of a result) coincided with speaker cues but not…

  1. Inactivation of the Lateral Entorhinal Area Increases the Influence of Visual Cues on Hippocampal Place Cell Activity

    Directory of Open Access Journals (Sweden)

    Kristin M. Scaplen

    2017-05-01

    Full Text Available The hippocampus is important for both navigation and associative learning. We previously showed that the hippocampus processes two-dimensional (2D) landmarks and objects differently. Our findings suggested that landmarks are more likely to be used for orientation and navigation, whereas objects are more likely to be used for associative learning. The process by which cues are recognized as relevant for navigation or associative learning, however, is an open question. Presumably both spatial and nonspatial information are necessary for classifying cues as landmarks or objects. The lateral entorhinal area (LEA) is a good candidate for participating in this process, as it is implicated in the processing of three-dimensional (3D) objects and object location. Because the LEA is one synapse upstream of the hippocampus and processes both spatial and nonspatial information, it is reasonable to hypothesize that the LEA modulates how the hippocampus uses 2D landmarks and objects. To test this hypothesis, we temporarily inactivated the LEA ipsilateral to the dorsal hippocampal recording site using fluorophore-conjugated muscimol (FCM) 30 min prior to three foraging sessions in which either the 2D landmark or the 2D object was back-projected to the floor of an open field. Prior to the second session we rotated the 2D cue by 90°. Cues were returned to the original configuration for the third session. Compared to the saline treatment, FCM inactivation increased the percentage of rotation responses to manipulations of the landmark cue, but had no effect on the information content of place fields. In contrast, FCM inactivation increased the information content of place fields in the presence of the object cue, but had no effect on rotation responses to the object cue. Thus, LEA inactivation increased the influence of visual cues on hippocampal activity, but the impact was qualitatively different for cues that are useful for navigation vs. cues that may not be useful for

  2. The response of guide dogs and pet dogs (Canis familiaris) to cues of human referential communication (pointing and gaze).

    Science.gov (United States)

    Ittyerah, Miriam; Gaunet, Florence

    2009-03-01

    The study raises the question of whether guide dogs and pet dogs are expected to differ in response to cues of referential communication given by their owners, especially since guide dogs grow up among sighted humans and, while living with their blind owners, still interact with several sighted people. Guide dogs and pet dogs were required to respond to point, point and gaze, gaze, and control cues of referential communication given by their owners. Results indicate that the two groups of dogs do not differ from each other, revealing that the visual status of the owner is not a factor in the use of cues of referential communication. Both groups of dogs have higher frequencies of performance and faster latencies for the point and the point-and-gaze cues as compared to the gaze cue only. However, responses to control cues are below chance performance for the guide dogs, whereas the pet dogs perform at chance. The below-chance performance of the guide dogs may be explained by a tendency among them to go and stand by the owner. The study indicates that both groups of dogs respond similarly in normal daily dyadic interaction with their owners, and that the human gaze may be a less salient cue for dogs than the pointing gesture.

  3. Are Distal and Proximal Visual Cues Equally Important during Spatial Learning in Mice? A Pilot Study of Overshadowing in the Spatial Domain

    Directory of Open Access Journals (Sweden)

    Marie Hébert

    2017-06-01

    Full Text Available Animals use distal and proximal visual cues to navigate accurately in their environment, with the possibility of associative mechanisms such as cue competition, as previously reported in honeybees, rats, birds, and humans. In this pilot study, we investigated one of the most common forms of cue competition, namely the overshadowing effect, between visual landmarks during spatial learning in mice. To this end, C57BL/6J × Sv129 mice were given a two-trial place recognition task in a T-maze, based on a novelty free-choice exploration paradigm previously developed to study spatial memory in rodents. As this procedure implies the use of different aspects of the environment to navigate (i.e., mice can perceive cues from each arm of the maze), we manipulated the distal and proximal visual landmarks during both the acquisition and retrieval phases. Our prospective findings provide a first set of clues in favor of the occurrence of overshadowing between visual cues during a spatial learning task in mice when both types of cues are of the same modality but at varying distances from the goal. In addition, the observed overshadowing seems to be non-reciprocal, as distal visual cues tend to overshadow the proximal ones when competition occurs, but not vice versa. The results of the present study offer a first insight into the occurrence of associative mechanisms during spatial learning in mice, and may open the way to promising new investigations in this area of research. Furthermore, the methodology used in this study provides a new, useful, and easy-to-use tool for the investigation of perceptive, cognitive, and/or attentional deficits in rodents.

  4. Chemical cues from fish heighten visual sensitivity in larval crabs through changes in photoreceptor structure and function.

    Science.gov (United States)

    Charpentier, Corie L; Cohen, Jonathan H

    2015-11-01

    Several predator avoidance strategies in zooplankton rely on the use of light to control vertical position in the water column. Although light is the primary cue for such photobehavior, predator chemical cues or kairomones increase swimming responses to light. We currently lack a mechanistic understanding for how zooplankton integrate visual and chemical cues to mediate phenotypic plasticity in defensive photobehavior. In marine systems, kairomones are thought to be amino sugar degradation products of fish body mucus. Here, we demonstrate that increasing concentrations of fish kairomones heightened sensitivity of light-mediated swimming behavior for two larval crab species (Rhithropanopeus harrisii and Hemigrapsus sanguineus). Consistent with these behavioral results, we report increased visual sensitivity at the retinal level in larval crab eyes directly following acute (1-3 h) kairomone exposure, as evidenced electrophysiologically from V-log I curves and morphologically from wider, shorter rhabdoms. The observed increases in visual sensitivity do not correspond with a decline in temporal resolution, because latency in electrophysiological responses actually increased after kairomone exposure. Collectively, these data suggest that phenotypic plasticity in larval crab photobehavior is achieved, at least in part, through rapid changes in photoreceptor structure and function. © 2015. Published by The Company of Biologists Ltd.

  5. Cue-reactivity in experienced electronic cigarette users: Novel stimulus videos and a pilot fMRI study

    Science.gov (United States)

    Nichols, Travis T.; Foulds, Jonathan; Yingst, Jessica; Veldheer, Susan; Hrabovsky, Shari; Richie, John; Eissenberg, Thomas; Wilson, Stephen J.

    2015-01-01

    Some individuals who try electronic cigarettes (e-cigarettes) continue to use long-term. Previous research has investigated the safety of e-cigarettes and their potential for use in smoking cessation, but comparatively little research has explored chronic or habitual e-cigarette use. In particular, the relationship between e-cigarette cues and craving is unknown. We sought to bridge this gap by developing a novel set of e-cigarette (salient) and electronic toothbrush (neutral) videos for use in cue-reactivity paradigms. Additionally, we demonstrate the utility of this approach in a pilot fMRI study of 7 experienced e-cigarette users. Participants were scanned while viewing the cue videos before and after 10-minute use of their own e-cigarettes (producing an 11.7 ng/ml increase in plasma nicotine concentration). A significant session (pre- and post-use) by video type (salient and neutral) interaction was exhibited in many sensorimotor areas commonly activated in other cue-reactivity paradigms. We did not detect significant cue-related activity in other brain regions notable in the craving literature. Possible reasons for this discrepancy are discussed, including the importance of matching cue stimuli to participants’ experiences. PMID:26478134

  6. Cue-reactivity in experienced electronic cigarette users: Novel stimulus videos and a pilot fMRI study.

    Science.gov (United States)

    Nichols, Travis T; Foulds, Jonathan; Yingst, Jessica M; Veldheer, Susan; Hrabovsky, Shari; Richie, John; Eissenberg, Thomas; Wilson, Stephen J

    2016-05-01

    Some individuals who try electronic cigarettes (e-cigarettes) continue to use long-term. Previous research has investigated the safety of e-cigarettes and their potential for use in smoking cessation, but comparatively little research has explored chronic or habitual e-cigarette use. In particular, the relationship between e-cigarette cues and craving is unknown. We sought to bridge this gap by developing a novel set of e-cigarette (salient) and electronic toothbrush (neutral) videos for use in cue-reactivity paradigms. Additionally, we demonstrate the utility of this approach in a pilot fMRI study of 7 experienced e-cigarette users. Participants were scanned while viewing the cue videos before and after 10-minute use of their own e-cigarettes (producing an 11.7 ng/ml increase in plasma nicotine concentration). A significant session (pre- and post-use) by video type (salient and neutral) interaction was exhibited in many sensorimotor areas commonly activated in other cue-reactivity paradigms. We did not detect significant cue-related activity in other brain regions notable in the craving literature. Possible reasons for this discrepancy are discussed, including the importance of matching cue stimuli to participants' experiences. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Audio-visual identification of place of articulation and voicing in white and babble noise.

    Science.gov (United States)

    Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild

    2009-07-01

    Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants, which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.

  8. Oral methylphenidate normalizes cingulate activity in cocaine addiction during a salient cognitive task

    International Nuclear Information System (INIS)

    Goldstein, R.Z.; Woicik, P.A.; Maloney, T.; Tomasi, D.; Alia-Klein, N.; Shan, J.; Honorario, J.; Samaras, D.; Wang, R.; Telang, F.; Wang, G.-J.; Volkow, N.D.

    2010-01-01

    Anterior cingulate cortex (ACC) hypoactivations during cognitive demand are a hallmark deficit in drug addiction. Methylphenidate (MPH) normalizes cortical function, enhancing task salience and improving associated cognitive abilities, in other frontal lobe pathologies; however, in clinical trials, MPH did not improve treatment outcome in cocaine addiction. We hypothesized that oral MPH would attenuate ACC hypoactivations and improve associated performance during a salient cognitive task in individuals with cocaine-use disorders (CUD). In the current functional MRI study, we used a rewarded drug cue-reactivity task previously shown to be associated with hypoactivations in both major ACC subdivisions (implicated in default brain function) in CUD compared with healthy controls. The task was performed by 13 CUD and 14 matched healthy controls on 2 separate days, after ingesting a single dose of oral MPH (20 mg) or placebo (lactose) in a counterbalanced fashion. Results show that oral MPH increased responses to this salient cognitive task in both major ACC subdivisions (including the caudal-dorsal ACC and the rostroventromedial ACC extending to the medial orbitofrontal cortex) in the CUD group. These functional MRI results were associated with reduced errors of commission (a common impulsivity measure) and improved task accuracy, especially during the drug (vs. neutral) cue-reactivity condition in all subjects. The clinical application of such MPH-induced brain-behavior enhancements remains to be tested.

  9. Oral methylphenidate normalizes cingulate activity in cocaine addiction during a salient cognitive task

    Energy Technology Data Exchange (ETDEWEB)

    Goldstein, R.Z.; Woicik, P.A.; Maloney, T.; Tomasi, D.; Alia-Klein, N.; Shan, J.; Honorario, J.; Samaras, D.; Wang, R.; Telang, F.; Wang, G.-J.; Volkow, N.D.

    2010-09-21

    Anterior cingulate cortex (ACC) hypoactivations during cognitive demand are a hallmark deficit in drug addiction. Methylphenidate (MPH) normalizes cortical function, enhancing task salience and improving associated cognitive abilities, in other frontal lobe pathologies; however, in clinical trials, MPH did not improve treatment outcome in cocaine addiction. We hypothesized that oral MPH would attenuate ACC hypoactivations and improve associated performance during a salient cognitive task in individuals with cocaine-use disorders (CUD). In the current functional MRI study, we used a rewarded drug cue-reactivity task previously shown to be associated with hypoactivations in both major ACC subdivisions (implicated in default brain function) in CUD compared with healthy controls. The task was performed by 13 CUD and 14 matched healthy controls on 2 separate days, after ingesting a single dose of oral MPH (20 mg) or placebo (lactose) in a counterbalanced fashion. Results show that oral MPH increased responses to this salient cognitive task in both major ACC subdivisions (including the caudal-dorsal ACC and the rostroventromedial ACC extending to the medial orbitofrontal cortex) in the CUD group. These functional MRI results were associated with reduced errors of commission (a common impulsivity measure) and improved task accuracy, especially during the drug (vs. neutral) cue-reactivity condition in all subjects. The clinical application of such MPH-induced brain-behavior enhancements remains to be tested.

  10. The Effect of Resolution on Detecting Visually Salient Preattentive Features

    Science.gov (United States)

    2015-06-01

    [Abstract not available; the indexed snippet consists of figure-caption fragments describing test images presented at five resolutions in descending order (a–e) and a plot compiling the areas of interest in those images, each symbol representing one image, plus a remark that attention is drawn to particular regions of a scene by highly salient features, for example the color of a flower.]

  11. Increased Variability and Asymmetric Expansion of the Hippocampal Spatial Representation in a Distal Cue-Dependent Memory Task.

    Science.gov (United States)

    Park, Seong-Beom; Lee, Inah

    2016-08-01

    Place cells in the hippocampus fire at specific positions in space, and distal cues in the environment play critical roles in determining the spatial firing patterns of place cells. Many studies have shown that place fields are influenced by distal cues in foraging animals. However, it is largely unknown whether distal-cue-dependent changes in place fields appear in different ways in a memory task if distal cues bear direct significance to achieving goals. We investigated this possibility in this study. Rats were trained to choose different spatial positions in a radial arm in association with distal cue configurations formed by visual cue sets attached to movable curtains around the apparatus. The animals were initially trained to associate readily discernible distal cue configurations (0° vs. 80° angular separation between distal cue sets) with different food-well positions and then later experienced ambiguous cue configurations (14° and 66°) intermixed with the original cue configurations. Rats showed no difficulty in transferring the associated memory formed for the original cue configurations when similar cue configurations were presented. Place field positions remained at the same locations across different cue configurations, whereas stability and coherence of spatial firing patterns were significantly disrupted when ambiguous cue configurations were introduced. Furthermore, the spatial representation was extended backward and skewed more negatively at the population level when processing ambiguous cue configurations, compared with when processing the original cue configurations only. This effect was more salient for large cue-separation conditions than for small cue-separation conditions. No significant rate remapping was observed across distal cue configurations. These findings suggest that place cells in the hippocampus dynamically change their detailed firing characteristics in response to a modified cue environment and that some of the firing

  12. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.

  13. Applying extinction research and theory to cue-exposure addiction treatments.

    Science.gov (United States)

    Conklin, Cynthia A; Tiffany, Stephen T

    2002-02-01

    To evaluate the efficacy of cue-exposure addiction treatment and review modern animal learning research to generate recommendations for substantially enhancing the effectiveness of this treatment. Meta-analysis of cue-exposure addiction treatment outcome studies (N=9), review of animal extinction research and theory, and evaluation of whether major principles from this literature are addressed adequately in cue-exposure treatments. The meta-analytical review showed that there is no consistent evidence for the efficacy of cue-exposure treatment as currently implemented. Moreover, procedures derived from the animal learning literature that should maximize the potential of extinction training are rarely used in cue-exposure treatments. Given what is known from animal extinction theory and research about extinguishing learned behavior, it is not surprising that cue-exposure treatments so often fail. This paper reviews current animal research regarding the most salient threats to the development and maintenance of extinction, and suggests several major procedures for increasing the efficacy of cue-exposure addiction treatment.

  14. Magpies can use local cues to retrieve their food caches.

    Science.gov (United States)

    Feenders, Gesa; Smulders, Tom V

    2011-03-01

    Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches. © Springer-Verlag 2010

  15. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach.

    Science.gov (United States)

    Byrne, Patrick A; Crawford, J Douglas

    2010-06-01

    It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weights egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration, despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment, had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest that heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
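    The inverse-variance weighting at the core of such an MLE model can be sketched in a few lines. This is the generic two-cue Gaussian combination rule, not the authors' full model (which adds the stability heuristic on top); the estimates and variances in the usage example are hypothetical.

```python
import numpy as np


def mle_combine(ego_estimate, allo_estimate, ego_var, allo_var):
    """Combine egocentric and allocentric position estimates by
    reliability (inverse-variance) weighting, the standard MLE rule
    for two unbiased Gaussian cues."""
    w_ego = (1.0 / ego_var) / (1.0 / ego_var + 1.0 / allo_var)
    w_allo = 1.0 - w_ego
    combined = w_ego * ego_estimate + w_allo * allo_estimate
    # The combined estimate is more reliable than either cue alone.
    combined_var = 1.0 / (1.0 / ego_var + 1.0 / allo_var)
    return combined, combined_var


# Hypothetical cue-conflict trial: a landmark "shift" makes the two
# estimates disagree; the weighting determines where the reach lands.
reach, var = mle_combine(ego_estimate=0.0, allo_estimate=2.0,
                         ego_var=1.0, allo_var=0.5)
```

    A stability heuristic could be grafted onto this sketch by inflating `allo_var` whenever the landmarks are judged unstable, shifting weight toward the egocentric estimate even when measured pointing variability is unchanged, which is the qualitative pattern the study reports for landmark vibration.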

  16. Laser light visual cueing for freezing of gait in Parkinson disease: A pilot study with male participants.

    Science.gov (United States)

    Bunting-Perry, Lisette; Spindler, Meredith; Robinson, Keith M; Noorigian, Joseph; Cianci, Heather J; Duda, John E

    2013-01-01

    Freezing of gait (FOG) is a debilitating feature of Parkinson disease (PD). In this pilot study, we sought to assess the efficacy of a rolling walker with a laser beam visual cue to treat FOG in PD patients. We recruited 22 subjects with idiopathic PD who experienced on- and off-medication FOG. Subjects performed three walking tasks both with and without the laser beam while on medications. Outcome measures included time to complete tasks, number of steps, and number of FOG episodes. A crossover design allowed within-group comparisons between the two conditions. No significant differences were observed between the two walking conditions across the three tasks. The laser beam, when applied as a visual cue on a rolling walker, did not diminish FOG in this study.

  17. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search.

    Science.gov (United States)

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.

  18. Cuttlefish dynamic camouflage: responses to substrate choice and integration of multiple visual cues.

    Science.gov (United States)

    Allen, Justine J; Mäthger, Lydia M; Barbosa, Alexandra; Buresch, Kendra C; Sogin, Emilia; Schwartz, Jillian; Chubb, Charles; Hanlon, Roger T

    2010-04-07

    Prey camouflage is an evolutionary response to predation pressure. Cephalopods have extensive camouflage capabilities and studying them can offer insight into effective camouflage design. Here, we examine whether cuttlefish, Sepia officinalis, show substrate or camouflage pattern preferences. In the first two experiments, cuttlefish were presented with a choice between different artificial substrates or between different natural substrates. First, the ability of cuttlefish to show substrate preference on artificial and natural substrates was established. Next, cuttlefish were offered substrates known to evoke the three main camouflage body pattern types these animals show: Uniform and Mottle (which function by background matching) or Disruptive. In a third experiment, cuttlefish were presented with conflicting visual cues on their left and right sides to assess their camouflage response. Given a choice between substrates they might encounter in nature, we found no strong substrate preference except when cuttlefish could bury themselves. Additionally, cuttlefish responded to conflicting visual cues with mixed body patterns in both the substrate preference and split substrate experiments. These results suggest that differences in energy costs for different camouflage body patterns may be minor and that pattern mixing and symmetry may play important roles in camouflage.

  19. Head-body ratio as a visual cue for stature in people and sculptural art.

    Science.gov (United States)

    Mather, George

    2010-01-01

    Body size is crucial for determining the outcome of competition for resources and mates. Many species use acoustic cues to measure caller body size. Vision is the pre-eminent sense for humans, but visual depth cues are of limited utility in judgments of absolute body size. The reliability of internal body proportion as a potential cue to stature was assessed with a large sample of anthropometric data, and the ratio of head height to body height (HBR) was found to be highly correlated with stature. A psychophysical experiment was carried out to investigate whether the cue actually influences stature judgments. Participants were shown pairs of photographs of human figures in which HBR had been manipulated systematically, and asked to select the figure that appeared taller. Results showed that figures with a relatively small HBR were consistently perceived as taller than figures with a relatively large HBR. Many classical statues such as Michelangelo's David depart from the classical proportions defined in Leonardo's Vitruvian Man. A supplementary experiment showed that perceived stature in classical statues also depends on HBR. Michelangelo's David was created with the HBR of a man 165 cm (5 ft 5 in) tall.

  20. Strategy selection in cue-based decision making.

    Science.gov (United States)

    Bryant, David J

    2014-06-01

    People can make use of a range of heuristic and rational, compensatory strategies to perform a multiple-cue judgment task. It has been proposed that people are sensitive to the amount of cognitive effort required to employ decision strategies. Experiment 1 employed a dual-task methodology to investigate whether participants' preference for heuristic versus compensatory decision strategies can be altered by increasing the cognitive demands of the task. As indicated by participants' decision times, a secondary task interfered more with the performance of a heuristic than compensatory decision strategy but did not affect the proportions of participants using either type of strategy. A stimulus set effect suggested that the conjunction of cue salience and cue validity might play a determining role in strategy selection. The results of Experiment 2 indicated that when a perceptually salient cue was also the most valid, the majority of participants preferred a single-cue heuristic strategy. Overall, the results contradict the view that heuristics are more likely to be adopted when a task is made more cognitively demanding. It is argued that people employ 2 learning processes during training, one an associative learning process in which cue-outcome associations are developed by sampling multiple cues, and another that involves the sequential examination of single cues to serve as a basis for a single-cue heuristic.
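    The contrast between a single-cue heuristic and a compensatory strategy can be illustrated with a minimal sketch. Take-the-best and the weighted-additive rule below are standard formulations from the heuristics literature rather than the specific task used in these experiments, and the cue values and validities are hypothetical.

```python
def take_the_best(cues_a, cues_b, validities):
    """Single-cue heuristic: examine cues in descending validity order
    and decide on the first cue that discriminates between options."""
    order = sorted(range(len(validities)),
                   key=lambda i: validities[i], reverse=True)
    for i in order:
        if cues_a[i] != cues_b[i]:
            return 'A' if cues_a[i] > cues_b[i] else 'B'
    return None  # no cue discriminates; guess


def weighted_additive(cues_a, cues_b, validities):
    """Compensatory strategy: sum validity-weighted cue values for
    each option and pick the larger total."""
    score_a = sum(w * c for w, c in zip(validities, cues_a))
    score_b = sum(w * c for w, c in zip(validities, cues_b))
    return 'A' if score_a > score_b else 'B'


# Hypothetical judgment: binary cues (1 = present), one highly valid
# and salient cue favoring A, two weaker cues favoring B.
validities = [0.9, 0.7, 0.6]
a, b = [1, 0, 0], [0, 1, 1]
take_the_best(a, b, validities)      # decides on the top cue alone -> 'A'
weighted_additive(a, b, validities)  # 0.9 vs. 0.7 + 0.6 = 1.3    -> 'B'
```

    The example shows why the conjunction of salience and validity matters: when the most salient cue is also the most valid, the single-cue heuristic and the compensatory rule can reach opposite verdicts from the same evidence.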

  1. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as a color change in a salient feature, called the "pop-out effect", can increase task-solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo...

  2. The time course of attentional deployment in contextual cueing.

    Science.gov (United States)

    Jiang, Yuhong V; Sigstad, Heather M; Swallow, Khena M

    2013-04-01

    The time course of attention is a major characteristic on which different types of attention diverge. In addition to explicit goals and salient stimuli, spatial attention is influenced by past experience. In contextual cueing, behaviorally relevant stimuli are more quickly found when they appear in a spatial context that has previously been encountered than when they appear in a new context. In this study, we investigated the time that it takes for contextual cueing to develop following the onset of search layout cues. In three experiments, participants searched for a T target in an array of Ls. Each array was consistently associated with a single target location. In a testing phase, we manipulated the stimulus onset asynchrony (SOA) between the repeated spatial layout and the search display. Contextual cueing was equivalent for a wide range of SOAs between 0 and 1,000 ms. The lack of an increase in contextual cueing with increasing cue durations suggests that as an implicit learning mechanism, contextual cueing cannot be effectively used until search begins.

  3. Aversive aftertaste changes visual food cue reactivity: An fMRI study on cross-modal perception.

    Science.gov (United States)

    Wabnegger, Albert; Schwab, Daniela; Schienle, Anne

    2018-04-23

    In western cultures, we are surrounded by appealing visual food cues that stimulate our desire to eat, overeating and subsequent weight gain. Cognitive control of appetite (reappraisal) requires substantial attentional resources and effort in order to work. Therefore, we tested an alternative approach for appetite regulation via functional magnetic resonance imaging. Healthy, normal-weight women were presented with images depicting food (high-/low-caloric), once in combination with a bitter aftertaste (a gustatory stop signal) and once with a neutral taste (water), in a retest design. The aversive aftertaste elicited increased activation in the orbitofrontal/dorsolateral prefrontal cortex (OFC, DLPFC), striatum and frontal operculum during the viewing of high-caloric food (vs. low-caloric food). In addition, the increase in DLPFC activity to high-caloric food in the bitter condition was correlated with reported appetite reduction. The findings indicate that this aftertaste procedure was able to reduce the appetitive value of visual food cues. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Amplitude modulation of sexy phrases is salient for song attractiveness in female canaries (Serinus canaria).

    Science.gov (United States)

    Pasteau, Magali; Ung, Davy; Kreutzer, Michel; Aubin, Thierry

    2012-07-01

    Song discrimination and recognition in songbird species have usually been studied by measuring responses to song playbacks. In female canaries, Serinus canaria, copulation solicitation displays (CSDs) are used as an index of female preferences, which are related to song recognition. Despite the fact that many studies underline the role of song syntax in this species, we observed that short segments of songs (a few seconds long) are enough for females to discriminate between conspecific and heterospecific songs, whereas such a short duration is not sufficient to identify the syntax rules. This suggests that other cues are salient for song recognition. In this experiment, we investigated the influence of amplitude modulation (AM) on the responses (CSDs) of female canaries to song playbacks. We used two groups of females: (1) raised in acoustic isolation and (2) raised in normal conditions. When adult, we tested their preferences for sexy phrases with different AMs. We broadcast three types of stimuli: (1) songs with natural canary AM, (2) songs with AM removed, or (3) song with wren Troglodytes troglodytes AM. Results indicate that female canaries prefer and have predispositions for a song type with the natural canary AM. Thus, this acoustic parameter is a salient cue for song attractiveness.

  5. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (c) 2015 APA, all rights reserved.

  6. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    Science.gov (United States)

    Geyer, Thomas; Shi, Zhuanghua; Müller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  7. A novel visual saliency detection method for infrared video sequences

    Science.gov (United States)

    Wang, Xin; Zhang, Yuzhen; Ning, Chen

    2017-12-01

    Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit greatly from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. In addition, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
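
    The temporal component of such a pipeline, a multi-frame symmetric difference, can be sketched as follows. This is a simplified illustration, not the authors' implementation, and `fuse` uses one plausible choice of "adaptive" weighting (weight each channel by its mean energy):

```python
import numpy as np

def temporal_saliency(prev, cur, nxt):
    """Multi-frame symmetric difference: a pixel counts as salient only
    if it differs from BOTH the previous and the next frame, which
    suppresses regions merely uncovered by background motion."""
    d_back = np.abs(cur - prev)
    d_fwd = np.abs(nxt - cur)
    return np.minimum(d_back, d_fwd)

def fuse(spatial, temporal, eps=1e-6):
    """Fuse spatial and temporal maps with data-driven weights, then
    normalize the result to [0, 1]."""
    w_s = spatial.mean() / (spatial.mean() + temporal.mean() + eps)
    fused = w_s * spatial + (1.0 - w_s) * temporal
    return fused / (fused.max() + eps)
```

    A transient object present only in the current frame gets a high temporal score, whereas a change appearing only between the current and next frame is suppressed by the `minimum`.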

  8. Evidence for impairments in using static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism.

    Science.gov (United States)

    Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B

    2008-09-01

    We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.

  9. Geometric Cues, Reference Frames, and the Equivalence of Experienced-Aligned and Novel-Aligned Views in Human Spatial Memory

    Science.gov (United States)

    Kelly, Jonathan W.; Sjolund, Lori A.; Sturz, Bradley R.

    2013-01-01

    Spatial memories are often organized around reference frames, and environmental shape provides a salient cue to reference frame selection. To date, however, the environmental cues responsible for influencing reference frame selection remain relatively unknown. To connect research on reference frame selection with that on orientation via…

  10. Multiple reward-cue contingencies favor expectancy over uncertainty in shaping the reward-cue attentional salience.

    Science.gov (United States)

    De Tommaso, Matteo; Mastropasqua, Tommaso; Turatto, Massimo

    2018-01-25

    Reward-predicting cues attract attention because of their motivational value. A debated question concerns the conditions under which a cue's attentional salience is governed more by reward expectancy than by reward uncertainty. To help shed light on this issue, we manipulated expectancy and uncertainty using three levels of reward-cue contingency, so that, for example, a high level of reward expectancy (p = .8) was compared with the highest level of reward uncertainty (p = .5). In Experiment 1, the best reward-cue during conditioning was preferentially attended in a subsequent visual search task. This result was replicated in Experiment 2, in which the cues were matched in terms of response history. In Experiment 3, we implemented a hybrid procedure consisting of two phases: an omission contingency procedure during conditioning, followed by a visual search task as in the previous experiments. Crucially, during both phases, the reward-cues were never task relevant. Results confirmed that, when multiple reward-cue contingencies are explored by a human observer, expectancy is the major factor controlling both the attentional and the oculomotor salience of the reward-cue.

  11. Electrophysiological indices of visual food cue-reactivity. Differences in obese, overweight and normal weight women.

    Science.gov (United States)

    Hume, David John; Howells, Fleur Margaret; Rauch, H G Laurie; Kroff, Jacolene; Lambert, Estelle Victoria

    2015-02-01

    Heightened food cue-reactivity in overweight and obese individuals has been related to aberrant functioning of neural circuitry implicated in motivational behaviours and reward-seeking. Here we explore the neurophysiology of visual food cue-reactivity in overweight and obese women, as compared with normal weight women, by assessing differences in cortical arousal and attentional processing elicited by food and neutral image inserts in a Stroop task, recording EEG spectral band power and ERP responses. Results show excess right frontal (F8) and left central (C3) relative beta band activity in overweight women during food task performance (indicative of pronounced early visual cue-reactivity) and blunted prefrontal (Fp1 and Fp2) theta band activity in obese women during office task performance (suggestive of executive dysfunction). Moreover, as compared to normal weight women, food images elicited greater right parietal (P4) ERP P200 amplitude in overweight women (denoting pronounced early attentional processing) and shorter right parietal (P4) ERP P300 latency in obese women (signifying enhanced and efficient maintained attentional processing). Differential measures of cortical arousal and attentional processing showed significant correlations with self-reported eating behaviour and body shape dissatisfaction, as well as with objectively assessed percent fat mass. The findings of the present study suggest that heightened food cue-reactivity can be neurophysiologically measured, that different neural circuits are implicated in the pathogenesis of overweight and obesity, and that EEG techniques may prove useful in identifying endophenotypic markers associated with an increased risk of externally mediated food consumption.

  12. Using multisensory cues to facilitate air traffic management.

    Science.gov (United States)

    Ngo, Mary K; Pierce, Russell S; Spence, Charles

    2012-12-01

    In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.

  13. Vividness of Visual Imagery and Incidental Recall of Verbal Cues, When Phenomenological Availability Reflects Long-Term Memory Accessibility

    OpenAIRE

    D’Angiulli, Amedeo; Runge, Matthew; Faulkner, Andrew; Zakizadeh, Jila; Chan, Aldrich; Morcos, Selvana

    2013-01-01

    The relationship between vivid visual mental images and unexpected recall (incidental recall) was replicated, refined and extended. In Experiment 1, participants were asked to generate mental images from imagery-evoking verbal-cues (controlled on several verbal properties) and then, on a trial-by-trial basis, rate the vividness of their images; thirty minutes later, participants were surprised with a task requiring free recall of the cues. Higher vividness ratings predicted better incidental ...

  14. From foreground to background: how task-neutral context influences contextual cueing of visual search

    Directory of Open Access Journals (Sweden)

    Xuelian eZang

    2016-06-01

    Full Text Available Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang & Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.

  15. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    Science.gov (United States)

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search. PMID:27375530

  16. Information Literacy on the Web: How College Students Use Visual and Textual Cues to Assess Credibility on Health Websites

    Directory of Open Access Journals (Sweden)

    Katrina L. Pariera

    2012-12-01

    Full Text Available One of the most important literacy skills in today’s information society is the ability to determine the credibility of online information. Users sort through a staggering number of websites while discerning which will provide satisfactory information. In this study, 70 college students assessed health websites that varied in design quality (low vs. high) and credibility (low vs. high). The study’s purpose was to determine whether students relied more on textual or visual cues in judging credibility, and whether this affected their recall of those cues later. The results indicate that for a high-credibility website, high design quality bolsters the perception of credibility, but design quality does not compensate for a low-credibility website. The recall test also indicated that credibility affects participants’ recall of visual and textual cues. Implications are discussed in light of the Elaboration Likelihood Model.

  17. PRISM, a Novel Visual Metaphor Measuring Personally Salient Appraisals, Attitudes and Decision-Making: Qualitative Evidence Synthesis.

    Directory of Open Access Journals (Sweden)

    Tom Sensky

    Full Text Available PRISM (the Pictorial Representation of Illness and Self Measure) is a novel, simple visual instrument. Its utility was initially discovered serendipitously, but it has been validated as a quantitative measure of suffering. Recently, new applications for different purposes, even in non-health settings, have encouraged further exploration of how PRISM works, and how it might be applied. This review summarises the results to date from applications of PRISM and proposes a generic conceptualisation of how PRISM works that is consistent with all these applications. A systematic review, in the form of a qualitative evidence synthesis, was carried out of all available published data on PRISM. Fifty-two publications were identified, with a total of 8254 participants. Facilitated by simple instructions, PRISM has been used with patient groups in a variety of settings and cultures. As a measure of suffering, PRISM has, with few exceptions, behaved as expected according to Eric Cassell's seminal conceptualisation of suffering. PRISM has also been used to assess beliefs about or attitudes to stressful working conditions, interpersonal relations, alcohol consumption, and suicide, amongst others. This review supports PRISM behaving as a visual metaphor of the relationship of objects (e.g., 'my illness') to a subject (e.g., 'myself') in a defined context (e.g., 'my life at the moment'). As a visual metaphor, it is quick to complete and yields personally salient information. PRISM is likely to have wide applications in assessing beliefs, attitudes, and decision-making, because of its properties, and because it yields both quantitative and qualitative data. In medicine, it can serve as a generic patient-reported outcome measure. It can serve as a tool for representational guidance, can be applied to developing strategies visually, and is likely to have applications in coaching, psychological assessment and therapeutic interventions.

  18. Limits on the role of retrieval cues in memory for actions: enactment effects in the absence of object cues in the environment.

    Science.gov (United States)

    Steffens, Melanie C; Buchner, Axel; Wender, Karl F; Decker, Claudia

    2007-12-01

    Verb-object phrases (open the umbrella, knock on the table) are usually remembered better if they have been enacted during study (also called subject-performed tasks) than if they have merely been learned verbally (verbal tasks). This enactment effect is particularly pronounced for phrases for which the objects (table) are present as cues in the study and test contexts. In previous studies with retrieval cues for some phrases, the enactment effect in free recall for the other phrases has been surprisingly small or even nonexistent. The present study tested whether the often replicated enactment effect in free recall can be found if none of the phrases contains context cues. In Experiment 1, we tested, and corroborated, the suppression hypothesis: The enactment effect for a given type of phrase (marker phrases) is modified by the presence or absence of cues for the other phrases in the list (experimental phrases). Experiments 2 and 3 replicated the enactment effect for phrases without cues. Experiment 2 also showed that the presence of cues either at study or at test is sufficient for obtaining a suppression effect, and Experiment 3 showed that the enactment effect may disappear altogether if retrieval cues are very salient.

  19. Vividness of visual imagery and incidental recall of verbal cues, when phenomenological availability reflects long-term memory accessibility

    Directory of Open Access Journals (Sweden)

    Amedeo eD'Angiulli

    2013-02-01

    Full Text Available The relationship between vivid visual mental images and unexpected recall (incidental recall) was replicated, refined and extended. In Experiment 1, participants were asked to generate mental images from imagery-evoking verbal-cues (controlled on several verbal properties) and then, on a trial-by-trial basis, rate the vividness of their images; thirty minutes later, participants were surprised with a task requiring free recall of the cues. Higher vividness ratings predicted better incidental recall of the cues than individual differences (whose effect was modest). Distributional analysis of image latencies through ex-Gaussian modeling showed an inverse relation between vividness and latency. However, recall was unrelated to image latency. The follow-up Experiment 2 showed that the processes underlying trial-by-trial vividness ratings are unrelated to the Vividness of Visual Imagery Questionnaire (VVIQ), as further supported by a meta-analysis of a randomly selected sample of relevant literature. The present findings suggest that vividness may act as an index of availability of long-term sensory traces, playing a non-epiphenomenal role in facilitating the access of those memories.

  20. Vividness of visual imagery and incidental recall of verbal cues, when phenomenological availability reflects long-term memory accessibility.

    Science.gov (United States)

    D'Angiulli, Amedeo; Runge, Matthew; Faulkner, Andrew; Zakizadeh, Jila; Chan, Aldrich; Morcos, Selvana

    2013-01-01

    The relationship between vivid visual mental images and unexpected recall (incidental recall) was replicated, refined, and extended. In Experiment 1, participants were asked to generate mental images from imagery-evoking verbal cues (controlled on several verbal properties) and then, on a trial-by-trial basis, rate the vividness of their images; 30 min later, participants were surprised with a task requiring free recall of the cues. Higher vividness ratings predicted better incidental recall of the cues than individual differences (whose effect was modest). Distributional analysis of image latencies through ex-Gaussian modeling showed an inverse relation between vividness and latency. However, recall was unrelated to image latency. The follow-up Experiment 2 showed that the processes underlying trial-by-trial vividness ratings are unrelated to the Vividness of Visual Imagery Questionnaire (VVIQ), as further supported by a meta-analysis of a randomly selected sample of relevant literature. The present findings suggest that vividness may act as an index of availability of long-term sensory traces, playing a non-epiphenomenal role in facilitating the access of those memories.
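
    The ex-Gaussian model used above for the latency distributions describes each latency as a Gaussian stage (mu, sigma) plus an independent exponential stage (tau); its third central moment equals 2 * tau**3, which makes a simple method-of-moments recovery possible. A minimal sketch on simulated data (illustrative parameter values, not the study's analysis; scipy's `exponnorm` uses the shape parameter K = tau / sigma):

```python
import numpy as np
from scipy.stats import exponnorm

# Simulate ex-Gaussian latencies (ms): Gaussian core mu=800, sigma=100,
# exponential tail tau=300.
mu, sigma, tau = 800.0, 100.0, 300.0
rts = exponnorm.rvs(tau / sigma, loc=mu, scale=sigma, size=20000, random_state=0)

def exgauss_moments(x):
    """Method-of-moments recovery: the Gaussian part contributes nothing
    to the third central moment, so m3 = 2 * tau**3 pins down the tail."""
    m1 = x.mean()
    m2 = x.var()
    m3 = ((x - m1) ** 3).mean()
    tau_hat = (m3 / 2.0) ** (1.0 / 3.0)
    sigma_hat = np.sqrt(max(m2 - tau_hat ** 2, 0.0))
    mu_hat = m1 - tau_hat
    return mu_hat, sigma_hat, tau_hat

mu_hat, sigma_hat, tau_hat = exgauss_moments(rts)
```

    The mean of an ex-Gaussian is mu + tau, so a long tail (large tau) shifts the mean without moving the distribution's mode much, which is why the tail and core parameters are reported separately in such analyses.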

  1. Contextual cueing of pop-out visual search: when context guides the deployment of attention.

    Science.gov (United States)

    Geyer, Thomas; Zehetleitner, Michael; Müller, Hermann J

    2010-05-01

    Visual context information can guide attention in demanding (i.e., inefficient) search tasks. When participants are repeatedly presented with identically arranged ('repeated') displays, reaction times are faster relative to newly composed ('non-repeated') displays. The present article examines whether this 'contextual cueing' effect also operates in simple (i.e., efficient) search tasks and, if so, whether it influences target, rather than response, selection. Singleton-feature targets were detected faster when the search items were presented in repeated, rather than non-repeated, arrangements. Importantly, repeated, relative to novel, displays also led to an increase in signal detection accuracy. Thus, contextual cueing can expedite the selection of pop-out targets, most likely by enhancing feature contrast signals at the overall-salience computation stage.

  2. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment.

    Science.gov (United States)

    Gong, Tao; Lam, Yau W; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages.

  3. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment

    Science.gov (United States)

    Gong, Tao; Lam, Yau W.; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages. PMID:28066281

  4. Imaging When Acting: Picture but Not Word Cues Induce Action-Related Biases of Visual Attention

    Science.gov (United States)

    Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna

    2012-01-01

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters. PMID:23087656

  5. Imaging when acting: picture but not word cues induce action-related biases of visual attention.

    Science.gov (United States)

    Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna

    2012-01-01

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing - an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.

  6. California scrub-jays reduce visual cues available to potential pilferers by matching food colour to caching substrate.

    Science.gov (United States)

    Kelley, Laura A; Clayton, Nicola S

    2017-07-01

    Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize sensory cues available to potential pilferers, such as caching in shaded areas and in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example, by hiding coloured food in a similar coloured substrate. We tested whether California scrub-jays (Aphelocoma californica) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present), or unseen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache protection strategy that also appears to be sensitive to social context. We conclude that studies of cache protection strategies should consider the perceptual capabilities of the cacher and potential pilferers.

  7. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4.

    Science.gov (United States)

    Braun, J

    1994-02-01

    In more than one respect, visual search for the most salient or the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.

  8. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    Science.gov (United States)

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  9. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    Directory of Open Access Journals (Sweden)

    Chie Takahashi

    2011-10-01

    Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change in hand opening caused by a given change in object size. Here, we examine whether the brain appropriately adjusts the weights given to visual and haptic size signals when tool geometry changes. We first estimated each cue's reliability by measuring size-discrimination thresholds in vision-alone and haptics-alone conditions. We varied haptic reliability using tools with different object-size:hand-opening ratios (1:1, 0.7:1, and 1.4:1). We then measured the weights given to vision and haptics with each tool, using a cue-conflict paradigm. The weight given to haptics varied with tool type in a manner that was well predicted by the single-cue reliabilities (MLE model; Ernst and Banks, 2002). This suggests that the process of visual-haptic integration appropriately accounts for variations in haptic reliability introduced by different tool geometries.
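
    The MLE model cited above has a compact closed form: each cue is weighted by its relative reliability (inverse variance), and the combined estimate is predicted to be more precise than either cue alone. A minimal sketch of that computation (the numeric values in any usage are illustrative, not data from the study):

```python
import math

def mle_weights(sigma_v, sigma_h):
    """Inverse-variance (reliability) weights for visual and haptic size cues."""
    r_v, r_h = 1.0 / sigma_v ** 2, 1.0 / sigma_h ** 2
    w_v = r_v / (r_v + r_h)
    return w_v, 1.0 - w_v

def mle_combine(s_v, sigma_v, s_h, sigma_h):
    """Combined size estimate and its predicted standard deviation
    under the MLE cue-combination model (Ernst and Banks, 2002)."""
    w_v, w_h = mle_weights(sigma_v, sigma_h)
    s_hat = w_v * s_v + w_h * s_h
    # The combined variance is the "parallel" combination of the two variances,
    # so it is never larger than the smaller single-cue variance.
    sigma_hat = math.sqrt((sigma_v ** 2 * sigma_h ** 2) / (sigma_v ** 2 + sigma_h ** 2))
    return s_hat, sigma_hat
```

    Changing the tool's object-size:hand-opening ratio rescales the hand-opening change produced by a given object-size change, which changes the haptic sigma; the model then predicts the haptic weight directly from the single-cue discrimination thresholds.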

  10. Analysis of Parallel and Transverse Visual Cues on the Gait of Individuals with Idiopathic Parkinson's Disease

    Science.gov (United States)

    de Melo Roiz, Roberta; Azevedo Cacho, Enio Walker; Cliquet, Alberto, Jr.; Barasnevicius Quagliato, Elizabeth Maria Aparecida

    2011-01-01

    Idiopathic Parkinson's disease (IPD) has been defined as a chronic progressive neurological disorder with characteristics that generate changes in gait pattern. Several studies have reported that appropriate external influences, such as visual or auditory cues may improve the gait pattern of patients with IPD. Therefore, the objective of this…

  11. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    Science.gov (United States)

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal - occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270ms after stimulus onset) increased temporal-occipital negativity, and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, to support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. "On" freezing in Parkinson's disease: resistance to visual cue walking devices.

    Science.gov (United States)

    Kompoliti, K; Goetz, C G; Leurgans, S; Morrissey, M; Siegel, I M

    2000-03-01

    To measure "on" freezing during unassisted walking (UW) and to test whether two devices, a modified inverted stick (MIS) and a visual laser beam stick (LBS), improved walking speed and the number of "on" freezing episodes in patients with Parkinson's disease (PD). Multiple visual cues can overcome "off" freezing episodes and can be useful in improving gait function in parkinsonian patients. These devices have not been specifically tested in "on" freezing, which is unresponsive to pharmacologic manipulations. Patients with PD, motor fluctuations and freezing while "on," attempted walking on a 60-ft track with each of three walking conditions in a randomized order: UW, MIS, and LBS. Total time to complete a trial, number of freezes, and the ratio of walking time to the number of freezes were compared using Friedman's test. Twenty-eight patients with PD, mean age 67.81 years (standard deviation [SD] 7.54), mean disease duration 13.04 years (SD 7.49), and mean motor Unified Parkinson's Disease Rating Scale score "on" 32.59 (SD 10.93), participated in the study. There was a statistically significant correlation between the time needed to complete a trial and the number of freezes for all three conditions (Spearman correlations: UW 0.973, LBS 0.930, and MIS 0.842). The median number of freezes, median time to walk in each condition, and median walking time per freeze were not significantly different in pairwise comparisons of the three conditions (Friedman's test). Of the 28 subjects, six showed improvement with the MIS and six with the LBS in at least one outcome measure. Assisting devices, specifically based on visual cues, are not consistently beneficial in overcoming "on" freezing in most patients with PD. Because this is an otherwise untreatable clinical problem and because occasional subjects do respond, cautious trials of such devices under the supervision of a health professional should be conducted to identify those patients who might benefit from their long-term use.

  13. The Effects of Spatial Endogenous Pre-cueing across Eccentricities.

    Science.gov (United States)

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored beyond an eccentricity of 20°. Given that the visual field has functional subdivisions and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information about targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a wide range of eccentricities.

  15. The effect of contextual cues on the encoding of motor memories.

    Science.gov (United States)

    Howard, Ian S; Wolpert, Daniel M; Franklin, David W

    2013-05-01

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.

  16. When message-frame fits salient cultural-frame, messages feel more persuasive.

    Science.gov (United States)

    Uskul, Ayse K; Oyserman, Daphna

    2010-03-01

    The present study examines the persuasive effects of tailored health messages, comparing those tailored to match (versus not match) both chronic cultural frame and momentarily salient cultural frame. Evidence from two studies (Study 1: n = 72 European Americans; Study 2: n = 48 Asian Americans) supports the hypothesis that message persuasiveness increases when chronic cultural frame, health message tailoring, and momentarily salient cultural frame all match. The hypothesis was tested using a message about the health risks of caffeine consumption among individuals prescreened to be regular caffeine consumers. After being primed for individualism, European Americans who read a health message that focused on the personal self were more likely to accept the message: they found it more persuasive, believed they were more at risk, and engaged in more message-congruent behaviour. The same effects were found among Asian Americans who were primed for collectivism and who read a health message that focused on relational obligations. The findings point to the importance of investigating the role of situational cues in the persuasive effects of health messages and suggest that matching message content to a primed frame consistent with the chronic frame may be an effective tailoring strategy.

  17. Global Repetition Influences Contextual Cueing

    Science.gov (United States)

    Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Li, Hong

    2018-01-01

    Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most previous studies investigated the contextual cueing effect with the same number of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios of repeated to non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1–4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio) in Experiments 1, 2, and 4, given that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggest that the presentation frequency of repeated relative to non-repeated displays can influence the strength of contextual cueing. In other words, global repetition statistics may be a crucial factor in mediating the contextual cueing effect. PMID:29636716

  18. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity

    Science.gov (United States)

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task where search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information was in chaos over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely crippled (Experiment 2). However, when we displaced the search items by a tiny random amount in the 2-dimensional (2D) plane but kept the depth information constant, the contextual cueing was preserved (Experiment 3). We conclude that the contextual cueing effect is robust in contexts provided by 3D space with stereoscopic information and, more importantly, that the visual system prioritizes stereoscopic information in the learning of spatial information when depth information is available. PMID:28912739

  19. THE EFFECT OF INTIMACY AND STATUS DISCREPANCY ON SALIENT AND NON-SALIENT CONFLICT STRATEGIES OF JAPANESE.

    Science.gov (United States)

    Nakatsugawa, Satomi; Takai, Jiro

    2015-10-01

    It has been claimed that Japanese people prefer passive forms of conflict strategies to preserve interpersonal harmony. This study aimed to identify some conditions in which such passive strategies are used. The effects of target intimacy and status discrepancy on the intent and use of salient and non-salient conflict strategies were examined, along with respondent sex differences. Questionnaires were collected from 205 Japanese university students. Results indicated that women were more likely to have non-salient intents than men and that intimacy affected considerateness intent but not avoidance intent. Active non-salient strategy was affected by status while passive non-salient strategy was affected by intimacy. Overall, target characteristics proved to be a strong factor in the intents and strategies employed in conflict situations of Japanese.

  20. Moving in Dim Light: Behavioral and Visual Adaptations in Nocturnal Ants.

    Science.gov (United States)

    Narendra, Ajay; Kamhi, J Frances; Ogawa, Yuri

    2017-11-01

    Visual navigation is a benchmark information processing task that can be used to identify the consequence of being active in dim-light environments. Visual navigational information that animals use during the day includes celestial cues such as the sun or the pattern of polarized skylight and terrestrial cues such as the entire panorama, canopy pattern, or significant salient features in the landscape. At night, some of these navigational cues are either unavailable or are significantly dimmer or less conspicuous than during the day. Even under these circumstances, animals navigate between locations of importance. Ants are a tractable system for studying navigation during day and night because the fine scale movement of individual animals can be recorded in high spatial and temporal detail. Ant species range from strictly diurnal through crepuscular to nocturnal. In addition, a number of species have the ability to change from a day- to a night-active lifestyle owing to environmental demands. Ants also offer an opportunity to identify the evolution of sensory structures for discrete temporal niches not only between species but also within a single species. Their unique caste system with an exclusive pedestrian mode of locomotion in workers and an exclusive life on the wing in males allows us to disentangle sensory adaptations that cater for different lifestyles. In this article, we review the visual navigational abilities of nocturnal ants and identify the optical and physiological adaptations they have evolved for being efficient visual navigators in dim-light. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  1. Multiperson visual focus of attention from head pose and meeting contextual cues.

    Science.gov (United States)

    Ba, Sileye O; Odobez, Jean-Marc

    2011-01-01

    This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable. By modeling the interactions between the different contexts and their combined and sometimes contradictory impact on the gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrated that the integration of the presentation and conversation dynamical context using our model can lead to significant performance improvements.

  2. Context generalization in Drosophila visual learning requires the mushroom bodies

    Science.gov (United States)

    Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin

    1999-08-01

    The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.

  3. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.

    Science.gov (United States)

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-15

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.
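
    The pursuit steering laws mentioned above can be sketched in a few lines. The toy simulation below implements a line-of-sight-rate-nulling rule (classical proportional navigation), which keeps the prey's apparent bearing roughly fixed on the pursuer's visual field and so produces a motion-camouflage-like geometry; the scenario, gain, and time step are illustrative assumptions, not values taken from the paper.

```python
import math

def pn_step(px, py, heading, speed, tx, ty, prev_los, gain, dt):
    """One Euler step of a pursuer that turns in proportion to the rotation
    rate of the line of sight (LOS) to the target, so that the target's
    apparent position on the visual field stays nearly fixed."""
    los = math.atan2(ty - py, tx - px)   # current bearing to the target
    los_rate = (los - prev_los) / dt     # apparent angular motion of the prey
    heading += gain * los_rate * dt      # steer to null the LOS rotation
    px += speed * math.cos(heading) * dt
    py += speed * math.sin(heading) * dt
    return px, py, heading, los

# Target flies straight north at speed 1; a faster pursuer (speed 2) using
# the rule above closes in on it.
px, py, heading = 0.0, 0.0, 0.0
tx, ty = 10.0, 0.0
prev_los = math.atan2(ty - py, tx - px)
dt, min_dist = 0.05, float("inf")
for _ in range(600):
    ty += 1.0 * dt
    px, py, heading, prev_los = pn_step(px, py, heading, 2.0, tx, ty,
                                        prev_los, 3.0, dt)
    min_dist = min(min_dist, math.hypot(tx - px, ty - py))
```

    With a navigation gain of about 3, this rule intercepts a constant-velocity target from a wide range of initial geometries, which is why it is a standard comparison model for biological pursuit data.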

  4. The effects of overfeeding on the neuronal response to visual food cues in thin and reduced-obese individuals.

    Directory of Open Access Journals (Sweden)

    Marc-Andre Cornier

    2009-07-01

    The regulation of energy intake is a complex process involving the integration of homeostatic signals and both internal and external sensory inputs. The objective of this study was to examine the effects of short-term overfeeding on the neuronal response to food-related visual stimuli in individuals prone and resistant to weight gain. Twenty-two thin and 19 reduced-obese (RO) individuals were studied. Functional magnetic resonance imaging (fMRI) was performed in the fasted state after two days of eucaloric energy intake and after two days of 30% overfeeding in a counterbalanced design. fMRI was performed while subjects viewed images of foods of high hedonic value and neutral non-food objects. In the eucaloric state, food as compared to non-food images elicited significantly greater activation of insula and inferior visual cortex in thin as compared to RO individuals. Two days of overfeeding led to significant attenuation of not only insula and visual cortex responses but also of hypothalamus response in thin as compared to RO individuals. These findings emphasize the important role of food-related visual cues in ingestive behavior and suggest that there are important phenotypic differences in the interactions between external visual sensory inputs, energy balance status, and brain regions involved in the regulation of energy intake. Furthermore, alterations in the neuronal response to food cues may relate to the propensity to gain weight.

  5. People, clothing, music, and arousal as contextual retrieval cues in verbal memory.

    Science.gov (United States)

    Standing, Lionel G; Bobbitt, Kristin E; Boisvert, Kathryn L; Dayholos, Kathy N; Gagnon, Anne M

    2008-10-01

    Four experiments (N = 164) on context-dependent memory were performed to explore the effects on verbal memory of incidental cues during the test session which replicated specific features of the learning session. These features involved (1) bystanders, (2) the clothing of the experimenter, (3) background music, and (4) the arousal level of the subject. Social contextual cues (bystanders or experimenter clothing) improved verbal recall or recognition. However, recall decreased when the contextual cue was a different stimulus taken from the same conceptual category (piano music by Chopin) that was heard during learning. Memory was unaffected by congruent internal cues, produced by the same physiological arousal level (low, moderate, or high heart rate) during the learning and test sessions. However, recall increased with the level of arousal across the three congruent conditions. The results emphasize the effectiveness as retrieval cues of stimuli which are socially salient, concrete, and external.

  6. Estimation of salient regions related to chronic gastritis using gastric X-ray images.

    Science.gov (United States)

    Togo, Ren; Ishihara, Kenta; Ogawa, Takahiro; Haseyama, Miki

    2016-10-01

    Since technical knowledge and a high degree of experience are necessary for diagnosis of chronic gastritis, computer-aided diagnosis (CAD) systems that analyze gastric X-ray images are desirable in the field of medicine. Therefore, a new method that estimates salient regions related to chronic gastritis/non-gastritis for supporting diagnosis is presented in this paper. In order to estimate salient regions related to chronic gastritis/non-gastritis, the proposed method monitors the distance between a target image feature and Support Vector Machine (SVM)-based hyperplane for its classification. Furthermore, our method realizes removal of the influence of regions outside the stomach by using positional relationships between the stomach and other organs. Consequently, since the proposed method successfully estimates salient regions of gastric X-ray images for which chronic gastritis and non-gastritis are unknown, visual support for inexperienced clinicians becomes feasible. Copyright © 2016 Elsevier Ltd. All rights reserved.
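
    The quantity the method monitors, the distance from an image-feature vector to the SVM decision hyperplane, has a simple closed form. The sketch below shows that computation in isolation; the feature vectors and trained weights (w, b) are placeholders, since the paper's actual image features and classifier are not reproduced here.

```python
import math

def hyperplane_distance(x, w, b):
    """Signed distance from feature vector x to the hyperplane w.x + b = 0.
    The sign gives the predicted class (e.g. gastritis vs. non-gastritis);
    the magnitude can serve as a per-region saliency score."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return (dot + b) / math.sqrt(sum(wi * wi for wi in w))

def salient_regions(features, w, b, threshold=1.0):
    """Indices of regions whose |distance| to the hyperplane exceeds a
    saliency threshold, i.e. regions the classifier is most confident about."""
    return [i for i, x in enumerate(features)
            if abs(hyperplane_distance(x, w, b)) >= threshold]
```

    In a full pipeline, each stomach region would contribute one feature vector, and regions far from the hyperplane would be highlighted as visual support for the clinician.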

  7. Visual attention to food cues is differentially modulated by gustatory-hedonic and post-ingestive attributes.

    Science.gov (United States)

    Garcia-Burgos, David; Lao, Junpeng; Munsch, Simone; Caldara, Roberto

    2017-07-01

    Although attentional biases towards food cues may play a critical role in food choices and eating behaviours, it remains largely unexplored which specific food attribute governs visual attentional deployment. The allocation of visual attention might be modulated by anticipatory postingestive consequences, by taste sensations derived from eating itself, or by both. Therefore, in order to obtain a comprehensive understanding of the attentional mechanisms involved in the processing of food-related cues, we recorded eye movements to five categories of well-standardised pictures: neutral non-food, high-calorie, good taste, distaste, and dangerous food. In particular, forty-four healthy adults of both sexes were assessed with an antisaccade paradigm (which requires the generation of a voluntary saccade and the suppression of a reflexive one) and a free viewing paradigm (which implies the free visual exploration of two images). The results showed that observers directed their initial fixations more often, and faster, at items with high survival relevance, such as nutrients and possible dangers, although an increase in antisaccade error rates was detected only for high-calorie items. We also found longer prosaccade fixation durations and initial fixation duration bias scores reflecting maintained attention towards the high-calorie, good taste, and danger categories, while shorter reaction times to correct an incorrect prosaccade reflected less difficulty in inhibiting distasteful images. Altogether, these findings suggest that visual attention is differentially modulated by both accepted and rejected food attributes, and also that normal-weight, non-eating-disordered individuals exhibit enhanced approach to food's postingestive effects and avoidance of distasteful items (such as bitter vegetables or pungent products). Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Wild, free-living rufous hummingbirds do not use geometric cues in a spatial task.

    Science.gov (United States)

    Hornsby, Mark A W; Hurly, T Andrew; Hamilton, Caitlin E; Pritchard, David J; Healy, Susan D

    2014-10-01

    In the laboratory, many species orient themselves using the geometric properties of an enclosure or array and geometric information is often preferred over visual cues. Whether animals use geometric cues when relocating rewarded locations in the wild, however, has rarely been investigated. We presented free-living rufous hummingbirds with a rectangular array of four artificial flowers to investigate learning of rewarded locations using geometric cues. In one treatment, we rewarded two of four flowers at diagonally opposite corners. In a second treatment, we provided a visual cue to the rewarded flower by connecting the flowers with "walls" consisting of four dowels (three white, one blue) laid on the ground connecting each of the flowers. Neither treatment elicited classical geometry results; instead, hummingbirds typically chose one particular flower over all others. When we exchanged that flower with another, hummingbirds tended to visit the original flower. These results suggest that (1) hummingbirds did not use geometric cues, but instead may have used a visually derived cue on the flowers themselves, and (2) using geometric cues may have been more difficult than using visual characteristics. Although hummingbirds typically prefer spatial over visual information, we hypothesize that they will not use geometric cues over stable visual features but that they make use of small, flower-specific visual cues. Such cues may play a more important role in foraging decisions than previously thought. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. The impact of different perceptual cues on fear and presence in virtual reality.

    Science.gov (United States)

    Peperkorn, Henrik M; Mühlberger, Andreas

    2013-01-01

    The impact of perceptual visual cues on spider-phobic reactions has been thoroughly investigated in recent years. Although the fear of being touched by a spider is part of the clinical picture of spider phobia, findings on the impact of tactile fear cues are rare. This study used virtual reality (VR) to selectively apply visual and tactile fear cues. Self-reported fear and the experience of presence in VR were measured in 20 phobic and 20 non-phobic participants. All participants were repeatedly exposed to visual cues, tactile cues, the combination of both, and no fear-relevant perceptual cues, five times per condition in random order. Results show that tactile fear cues can trigger fear independently of visual cues. Participants experienced the highest levels of presence in the combined and the control conditions. Presence may not only be seen in association with the emotional impact of specific cues in VR but also appears to depend on the comparability of a virtual environment to a real-life situation.

  10. Prey capture behaviour evoked by simple visual stimuli in larval zebrafish

    Directory of Open Access Journals (Sweden)

    Isaac Henry Bianco

    2011-12-01

    Understanding how the nervous system recognises salient stimuli in the environment and selects and executes the appropriate behavioural responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behaviour in larval zebrafish, we developed virtual reality assays in which precisely controlled visual cues can be presented to larvae whilst their behaviour is automatically monitored using machine-vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low-amplitude orienting turns (∼20°) towards small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analysing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and that larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behaviour in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey.

  11. A novel experimental method for measuring vergence and accommodation responses to the main near visual cues in typical and atypical groups.

    Science.gov (United States)

    Horwood, Anna M; Riddell, Patricia M

    2009-01-01

    Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure vergence and accommodation simultaneously and objectively in a wide range of participants of all ages while they fixate targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues by using either a Gabor patch or a detailed picture target; and looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive, visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur had a clinically significant effect only when disparity was absent. Proximity had very little effect. There was considerable inter-participant variation. We predict that the relative weighting of near-cue use is likely to vary between clinical groups, and we present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.

  12. Assessment of attention threshold in rats by titration of visual cue duration during the five-choice serial reaction time task

    Science.gov (United States)

    Martin, Thomas J.; Grigg, Amanda; Kim, Susy A.; Ririe, Douglas G.; Eisenach, James C.

    2014-01-01

    Background: The 5-choice serial reaction time task (5CSRTT) is commonly used to assess attention in rodents. We sought to develop a variant of the 5CSRTT that would speed training to objective success criteria, and to test whether this variant could determine attention capability in each subject. New Method: Fischer 344 rats were trained to perform a variant of the 5CSRTT in which the duration of visual cue presentation (cue duration) was titrated between trials based upon performance: the cue duration was decreased when the subject made a correct response and increased after incorrect responses or omissions. Additionally, test-day challenges were provided, consisting of lengthening the intertrial interval and inclusion of a visual distracting stimulus. Results: Rats readily titrated the cue duration to less than 1 s in 25 training sessions or fewer (mean ± SEM, 22.9 ± 0.7), and the median cue duration (MCD) was calculated as a measure of attention threshold. Increasing the intertrial interval increased premature responses, decreased the number of trials completed, and increased the MCD. Decreasing the intertrial interval and the time allotted for consuming the food reward demonstrated that a minimum of 3.5 s is required for rats to consume two food pellets and successfully attend to the next trial. Visual distraction in the form of a 3 Hz flashing light increased the MCD and both premature and time-out responses. Comparison with Existing Method: The titration variant of the 5CSRTT dynamically measures attention threshold across a wide range of subject performance and significantly decreases the time required for training. Task challenges produce effects similar to those reported for the classical procedure. Conclusions: The titration 5CSRTT method is an efficient training procedure for assessing attention and can be used to assess the limit of performance ability across subjects and various schedule manipulations.
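The between-trial titration rule described above (shorten the cue after a correct response, lengthen it after an incorrect response or omission) amounts to a simple staircase procedure. A minimal sketch in Python; the step size, starting duration, and floor/ceiling bounds are hypothetical illustrations, not values taken from the study:

```python
import statistics

def titrate_cue_duration(cue_ms, outcome, step_ms=50, floor_ms=100, ceiling_ms=10_000):
    # Decrease the cue duration after a correct response; increase it after
    # an incorrect response or an omission, clamped to [floor_ms, ceiling_ms].
    if outcome == "correct":
        cue_ms -= step_ms
    else:  # "incorrect" or "omission"
        cue_ms += step_ms
    return max(floor_ms, min(cue_ms, ceiling_ms))

# A toy session: trial outcomes drive the staircase, and the median of the
# titrated cue durations (the MCD) serves as the attention threshold.
outcomes = ["correct", "correct", "omission", "correct", "incorrect", "correct"]
cue_ms, history = 1000, []
for outcome in outcomes:
    cue_ms = titrate_cue_duration(cue_ms, outcome)
    history.append(cue_ms)
mcd = statistics.median(history)
```

Because the staircase oscillates around the duration at which the subject transitions between success and failure, the MCD tracks the attention threshold rather than a fixed experimenter-chosen cue duration.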

  13. Self-Control and Impulsiveness in Nondieting Adult Human Females: Effects of Visual Food Cues and Food Deprivation

    Science.gov (United States)

    Forzano, Lori-Ann B.; Chelonis, John J.; Casey, Caitlin; Forward, Marion; Stachowiak, Jacqueline A.; Wood, Jennifer

    2010-01-01

    Self-control can be defined as the choice of a larger, more delayed reinforcer over a smaller, less delayed reinforcer, and impulsiveness as the opposite. Previous research suggests that exposure to visual food cues affects adult humans' self-control. Previous research also suggests that food deprivation decreases adult humans' self-control. The…

  14. Reward reduces conflict by enhancing attentional control and biasing visual cortical processing.

    Science.gov (United States)

    Padmala, Srikanth; Pessoa, Luiz

    2011-11-01

    How does motivation interact with cognitive control during challenging behavioral conditions? Here, we investigated the interactions between motivation and cognition during a response conflict task and tested a specific model of the effect of reward on cognitive processing. Behaviorally, participants exhibited reduced conflict during the reward versus no-reward condition. Brain imaging results revealed that a group of subcortical and fronto-parietal regions was robustly influenced by reward at cue processing and, importantly, that cue-related responses in fronto-parietal attentional regions were predictive of reduced conflict-related signals in the medial pFC (MPFC)/ACC during the upcoming target phase. Path analysis revealed that the relationship between cue responses in the right intraparietal sulcus (IPS) and interference-related responses in the MPFC during the subsequent target phase was mediated via signals in the left fusiform gyrus, which we linked to distractor-related processing. Finally, reward increased functional connectivity between the right IPS and both bilateral putamen and bilateral nucleus accumbens during the cue phase, a relationship that covaried with across-individual sensitivity to reward in the case of the right nucleus accumbens. Taken together, our findings are consistent with a model in which motivationally salient cues are employed to upregulate top-down control processes that bias the selection of visual information, thereby leading to more efficient stimulus processing during conflict conditions.

  15. Contextual cueing impairment in patients with age-related macular degeneration.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan

    2013-09-12

    Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
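The contextual cueing effect in this and the following records is typically quantified as the search-time advantage for repeated distractor configurations over novel ones. A minimal sketch; the reaction times and variable names are hypothetical, for illustration only:

```python
from statistics import mean

def contextual_cueing_effect(rts_repeated_ms, rts_novel_ms):
    # Positive values indicate faster search in repeated (learned)
    # configurations than in novel (random) configurations.
    return mean(rts_novel_ms) - mean(rts_repeated_ms)

# Hypothetical per-condition search times in milliseconds.
repeated = [820, 790, 805]
novel = [905, 870, 910]
effect = contextual_cueing_effect(repeated, novel)
```

A near-zero effect, as reported for the AMD patients searching with their worse eye, indicates that learned context no longer guides search.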

  16. Grasp cueing and joint attention.

    Science.gov (United States)

    Tschentscher, Nadja; Fischer, Martin H

    2008-10-01

    We studied how two different hand posture cues affect joint attention in normal observers. Visual targets appeared over lateralized objects, with different delays after centrally presented hand postures. Attention was cued by either hand direction or the congruency between hand aperture and object size. Participants pressed a button when they detected a target. Direction cues alone facilitated target detection following short delays but aperture cues alone were ineffective. In contrast, when hand postures combined direction and aperture cues, aperture congruency effects without directional congruency effects emerged and persisted, but only for power grips. These results suggest that parallel parameter specification makes joint attention mechanisms exquisitely sensitive to the timing and content of contextual cues.

  17. Visual cues for the retrieval of landmark memories by navigating wood ants.

    Science.gov (United States)

    Harris, Robert A; Graham, Paul; Collett, Thomas S

    2007-01-23

    Even on short routes, ants can be guided by multiple visual memories. We investigate here the cues controlling memory retrieval as wood ants approach a one- or two-edged landmark to collect sucrose at a point along its base. In such tasks, ants store the desired retinal position of landmark edges at several points along their route. They guide subsequent trips by retrieving the appropriate memory and moving to bring the edges in the scene toward the stored positions. The apparent width of the landmark turns out to be a powerful cue for retrieving the desired retinal position of a landmark edge. Two other potential cues, the landmark's apparent height and the distance that the ant walks, have little effect on memory retrieval. A simple model encapsulates these conclusions and reproduces the ants' routes in several conditions. According to this model, the ant stores a look-up table. Each entry contains the apparent width of the landmark and the desired retinal position of vertical edges. The currently perceived width provides an index for retrieving the associated stored edge positions. The model accounts for the population behavior of ants and the idiosyncratic training routes of individual ants. Our results imply binding between the edge of a shape and its width and, further, imply that assessing the width of a shape does not depend on the presence of any particular local feature, such as a landmark edge. This property makes the ant's retrieval and guidance system relatively robust to edge occlusions.

  18. Task-relevant information is prioritized in spatiotemporal contextual cueing.

    Science.gov (United States)

    Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun

    2016-11-01

    Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing under situations in which multidimensional regularities exist simultaneously. In everyday vision, different kinds of information, such as object identity and location, appear simultaneously and interact with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals would be prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests our visual behavior is focused on regularities that are relevant to our current goal.

  19. Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.

    Science.gov (United States)

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2014-04-01

    Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation), and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is improved by stimulus homogeneity only when that homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Occlusion edge blur: A cue to relative visual depth

    OpenAIRE

    Marshall, J.A.; Burbeck, C.A.; Ariely, D.; Rolland, J.P.; Martin, K.E.

    1996-01-01

    We studied whether the blur/sharpness of an occlusion boundary between a sharply focused surface and a blurred surface is used as a relative depth cue. Observers judged relative depth in pairs of images that differed only in the blurriness of the common boundary between two adjoining texture regions, one blurred and one sharply focused. Two experiments were conducted; in both, observers consistently used the blur of the boundary as a cue to relative depth. However, the strength of the cue, re...

  1. The Effects of Visual Imagery and Keyword Cues on Third-Grade Readers' Memory, Comprehension, and Vocabulary Knowledge

    Science.gov (United States)

    Brooker, Heather Rogers

    2013-01-01

    It is estimated that nearly 70% of high school students in the United States need some form of reading remediation, with the most common need being the ability to comprehend the content and significance of the text (Biancarosa & Snow, 2004). Research findings support the use of visual imagery and keyword cues as effective comprehension…

  2. Viewpoint-independent contextual cueing effect

    Directory of Open Access Journals (Sweden)

    Taiga Tsuchiai

    2012-06-01

    We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether a viewpoint-independent spatial layout can be acquired implicitly. For this purpose, we used the contextual cueing effect, a learning effect of spatial layout in visual search displays known to be implicit. We compared the transfer of the contextual cueing effect between cases with and without self-motion by using visual search displays of 3D objects, which changed according to the participant's assumed viewing location. The contextual cueing effect was obtained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in spatial coordinates and suggests that the spatial representation of object layouts or scenes can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in these layout representations.

  3. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing.

    Directory of Open Access Journals (Sweden)

    Rebecca E Paladini

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings (i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities, e.g., visual and auditory) may trigger asymmetries in visuospatial attention, with a facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained either no auditory cues (i.e., a unimodal visual condition) or spatially congruent, spatially incongruent, or spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when

  4. The Effects of Visual Cues and Learners' Field Dependence in Multiple External Representations Environment for Novice Program Comprehension

    Science.gov (United States)

    Wei, Liew Tze; Sazilah, Salam

    2012-01-01

    This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…

  5. Priming and the guidance by visual and categorical templates in visual search

    NARCIS (Netherlands)

    Wilschut, A.M.; Theeuwes, J.; Olivers, C.N.L.

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual

  6. Feasibility and Preliminary Efficacy of Visual Cue Training to Improve Adaptability of Walking after Stroke: Multi-Centre, Single-Blind Randomised Control Pilot Trial

    Science.gov (United States)

    Hollands, Kristen L.; Pelton, Trudy A.; Wimperis, Andrew; Whitham, Diane; Tan, Wei; Jowett, Sue; Sackley, Catherine M.; Wing, Alan M.; Tyson, Sarah F.; Mathias, Jonathan; Hensman, Marianne; van Vliet, Paulette M.

    2015-01-01

    Objectives: Given the importance of vision in the control of walking, and evidence indicating that varied practice of walking improves mobility outcomes, this study sought to examine the feasibility and preliminary efficacy of varied walking practice in response to visual cues for the rehabilitation of walking following stroke. Design: This 3-arm parallel, multi-centre, assessor-blind, randomised control trial was conducted within outpatient neurorehabilitation services. Participants: Community-dwelling stroke survivors. Conclusions: Walking-speed adaptability practice using visual cues is feasible and may improve mobility and balance. Future studies should continue a carefully phased approach using identified methods to improve retention. Trial Registration: Clinicaltrials.gov NCT01600391 PMID:26445137

  7. Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load

    Science.gov (United States)

    Santangelo, Valerio; Spence, Charles

    2007-01-01

    We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…

  8. The Effect of Retrieval Cues on Visual Preferences and Memory in Infancy: Evidence for a Four-Phase Attention Function.

    Science.gov (United States)

    Bahrick, Lorraine E.; Hernandez-Reif, Maria; Pickens, Jeffrey N.

    1997-01-01

    Tested hypothesis from Bahrick and Pickens' infant attention model that retrieval cues increase memory accessibility and shift visual preferences toward greater novelty to resemble recent memories. Found that after retention intervals associated with remote or intermediate memory, previous familiarity preferences shifted to null or novelty…

  9. The Effects of Visual Beats on Prosodic Prominence: Acoustic Analyses, Auditory Perception and Visual Perception

    Science.gov (United States)

    Krahmer, Emiel; Swerts, Marc

    2007-01-01

    Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…

  10. Action experience changes attention to kinematic cues

    Directory of Open Access Journals (Sweden)

    Courtney Filippi

    2016-02-01

    The current study used remote corneal-reflection eye tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos in which the actor's hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor's hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) x 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe-first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues, suggesting that short-term experience changes attention to kinematics.

  11. The effect of offset cues on saccade programming and covert attention.

    Science.gov (United States)

    Smith, Daniel T; Casteau, Soazig

    2018-02-01

    Salient peripheral events trigger fast, "exogenous" covert orienting. The influential premotor theory of attention argues that covert orienting of attention depends upon planned but unexecuted eye movements. One problem for this theory is that salient peripheral events, such as offsets, appear to summon attention when used to measure covert attention (e.g., in the Posner cueing task) but appear not to elicit oculomotor preparation in tasks that require overt orienting (e.g., the remote distractor paradigm). Here, we examined the effects of peripheral offsets on covert attention and saccade preparation. Experiment 1 suggested that transient offsets summoned attention in a manual detection task without triggering saccade preparation in a saccadic localisation task, although there was a high proportion of saccadic capture errors on "no-target" trials, where a cue was presented but no target appeared. In Experiment 2, "no-target" trials were removed. Here, transient offsets produced both attentional facilitation and faster saccadic responses on valid-cue trials. A third experiment showed that the permanent disappearance of an object also elicited attentional facilitation and faster saccadic reaction times. These experiments demonstrate that offsets trigger both saccade programming and covert attentional orienting, consistent with the idea that exogenous covert orienting is tightly coupled with oculomotor activation. The finding that no-go trials attenuate oculomotor priming effects offers a way to reconcile the current findings with previous claims of a dissociation between covert attention and oculomotor control in paradigms that use a high proportion of catch trials.

  12. Laserlight cues for gait freezing in Parkinson's disease: an open-label study.

    Science.gov (United States)

    Donovan, S; Lim, C; Diaz, N; Browner, N; Rose, P; Sudarsky, L R; Tarsy, D; Fahn, S; Simon, D K

    2011-05-01

    Freezing of gait (FOG) and falls are major sources of disability for Parkinson's disease (PD) patients and show limited responsiveness to medications. We assessed the efficacy of visual cues for overcoming FOG in an open-label study of 26 patients with PD. The change in the frequency of falls was a secondary outcome measure. Subjects underwent a 1-2 month baseline period of use of a cane or walker without visual cues, followed by 1 month using the same device with the laserlight visual cue. The laserlight visual cue was associated with a modest but significant mean reduction in FOG Questionnaire (FOGQ) scores of 1.25 ± 0.48 (p = 0.0152, two-tailed paired t-test), representing a 6.6% improvement compared to the mean baseline FOGQ score of 18.8. The mean reduction in fall frequency was 39.5 ± 9.3% with the laserlight visual cue among subjects experiencing at least one fall during the baseline and subsequent study periods (p = 0.002; two-tailed one-sample t-test with hypothesized mean of 0). Though some individual subjects may have benefited, the overall mean performance on the timed gait test (TGT) across all subjects did not significantly change. However, among the 4 subjects who underwent repeated testing of the TGT, one showed a 50% mean improvement in TGT performance with the laserlight visual cue (p = 0.005; two-tailed paired t-test). This open-label study provides evidence for modest efficacy of a laserlight visual cue in overcoming FOG and reducing falls in PD patients. Copyright © 2010 Elsevier Ltd. All rights reserved.
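As a quick arithmetic check, the 6.6% figure reported above follows directly from the mean FOGQ reduction relative to the mean baseline score:

```python
baseline_fogq = 18.8        # mean baseline FOG Questionnaire score
mean_reduction = 1.25       # mean FOGQ reduction with the laserlight cue
percent_improvement = 100 * mean_reduction / baseline_fogq  # ≈ 6.6%
```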

  13. Visual motion-sensitive neurons in the bumblebee brain convey information about landmarks during a navigational task

    Directory of Open Access Journals (Sweden)

    Marcel Mertes

    2014-09-01

    Bees use visual memories to find the spatial location of previously learnt food sites. Characteristic learning flights help the bees acquire these memories at newly discovered foraging locations, where landmarks (salient objects in the vicinity of the goal location) can play an important role in guiding the animal's homing behavior. Although behavioral experiments have shown that bees can use a variety of visual cues to distinguish objects as landmarks, the question of how landmark features are encoded by the visual system is still open. Recently, it could be shown that motion cues are sufficient to allow bees to localize their goal using landmarks that can hardly be discriminated from the background texture. Here, we tested the hypothesis that motion-sensitive neurons in the bee's visual pathway provide information about such landmarks during a learning flight and might, thus, play a role in goal localization. We tracked learning flights of free-flying bumblebees (Bombus terrestris) in an arena with distinct visual landmarks, reconstructed the visual input during these flights, and replayed ego-perspective movies to tethered bumblebees while recording the activity of direction-selective wide-field neurons in their optic lobe. By comparing neuronal responses to a typical learning flight and to targeted modifications of landmark properties in this movie, we demonstrate that these objects are indeed represented in the bee's visual motion pathway. We find that object-induced responses vary little with object texture, which is in agreement with behavioral evidence. These neurons thus convey information about landmark properties that are useful for view-based homing.

  14. Craving Responses to Methamphetamine and Sexual Visual Cues in Individuals With Methamphetamine Use Disorder After Long-Term Drug Rehabilitation

    Directory of Open Access Journals (Sweden)

    Shucai Huang

    2018-04-01

    Full Text Available Studies utilizing functional magnetic resonance imaging (fMRI) cue-reactivity paradigms have demonstrated that short-term abstinent or current methamphetamine (MA) users have increased brain activity in the ventral striatum, caudate nucleus and medial frontal cortex, when exposed to MA-related visual cues. However, patterns of brain activity following cue-reactivity in subjects with long-term MA abstinence, especially long-term compulsory drug rehabilitation, have not been well studied. To enrich knowledge in this field, functional brain imaging was conducted during a cue-reactivity paradigm task in 28 individuals with MA use disorder following long-term compulsory drug rehabilitation, and 27 healthy control subjects. The results showed that, when compared with controls, individuals with MA use disorder displayed elevated activity in the bilateral medial prefrontal cortex (mPFC) and right lateral posterior cingulate cortex in response to MA-related images. Additionally, the anterior cingulate region of mPFC activation during the MA-related cue-reactivity paradigm was positively correlated with craving alterations and previous frequency of drug use. No significant differences in brain activity in response to pornographic images were found between the two groups. Compared to MA cues, individuals with MA use disorder had increased activation in the occipital lobe when exposed to pornographic cues. In conclusion, the present study indicates that, even after long-term drug rehabilitation, individuals with MA use disorder have unique brain activity when exposed to MA-related cues. Additionally, our results illustrate that the libido brain response might be restored, and that sexual demand might be more robust than drug demand, in individuals with MA use disorder following long-term drug rehabilitation.

  15. Craving Responses to Methamphetamine and Sexual Visual Cues in Individuals With Methamphetamine Use Disorder After Long-Term Drug Rehabilitation.

    Science.gov (United States)

    Huang, Shucai; Zhang, Zhixue; Dai, Yuanyuan; Zhang, Changcun; Yang, Cheng; Fan, Lidan; Liu, Jun; Hao, Wei; Chen, Hongxian

    2018-01-01

    Studies utilizing functional magnetic resonance imaging (fMRI) cue-reactivity paradigms have demonstrated that short-term abstinent or current methamphetamine (MA) users have increased brain activity in the ventral striatum, caudate nucleus and medial frontal cortex, when exposed to MA-related visual cues. However, patterns of brain activity following cue-reactivity in subjects with long-term MA abstinence, especially long-term compulsory drug rehabilitation, have not been well studied. To enrich knowledge in this field, functional brain imaging was conducted during a cue-reactivity paradigm task in 28 individuals with MA use disorder following long-term compulsory drug rehabilitation, and 27 healthy control subjects. The results showed that, when compared with controls, individuals with MA use disorder displayed elevated activity in the bilateral medial prefrontal cortex (mPFC) and right lateral posterior cingulate cortex in response to MA-related images. Additionally, the anterior cingulate region of mPFC activation during the MA-related cue-reactivity paradigm was positively correlated with craving alterations and previous frequency of drug use. No significant differences in brain activity in response to pornographic images were found between the two groups. Compared to MA cues, individuals with MA use disorder had increased activation in the occipital lobe when exposed to pornographic cues. In conclusion, the present study indicates that, even after long-term drug rehabilitation, individuals with MA use disorder have unique brain activity when exposed to MA-related cues. Additionally, our results illustrate that the libido brain response might be restored, and that sexual demand might be more robust than drug demand, in individuals with MA use disorder following long-term drug rehabilitation.

  16. Looking into the future: An inward bias in aesthetic experience driven only by gaze cues.

    Science.gov (United States)

    Chen, Yi-Chia; Colombatto, Clara; Scholl, Brian J

    2018-07-01

    The inward bias is an especially powerful principle of aesthetic experience: In framed images (e.g. photographs), we prefer peripheral figures that face inward (vs. outward). Why does this bias exist? Since agents tend to act in the direction in which they are facing, one intriguing possibility is that the inward bias reflects a preference to view scenes from a perspective that will allow us to witness those predicted future actions. This account has been difficult to test with previous displays, in which facing direction is often confounded with either global shape profiles or the relative locations of salient features (since e.g. someone's face is generally more visually interesting than the back of their head). But here we demonstrate a robust inward bias in aesthetic judgment driven by a cue that is socially powerful but visually subtle: averted gaze. Subjects adjusted the positions of people in images to maximize the images' aesthetic appeal. People with direct gaze were not placed preferentially in particular regions, but people with averted gaze were reliably placed so that they appeared to be looking inward. This demonstrates that the inward bias can arise from visually subtle features, when those features signal how future events may unfold. Copyright © 2018. Published by Elsevier B.V.

  17. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

    This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining...... synthetic multimodal cues, from vision, haptics, and audition, in order to realize virtual experiences of walking on simulated ground surfaces or other features....

  18. Tailored information for cancer patients on the Internet: effects of visual cues and language complexity on information recall and satisfaction.

    NARCIS (Netherlands)

    Weert, J.C.M. van; Noort, G. van; Bol, N.; Dijk, L. van; Tates, K.; Jansen, J.

    2011-01-01

    Objective: This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. Methods: An experiment using a 2 (complex vs. non-complex

  19. Tailored information for cancer patients on the Internet: effects of visual cues and language complexity on information recall and satisfaction

    NARCIS (Netherlands)

    van Weert, J.C.M.; van Noort, G.; Bol, N.; van Dijk, L.; Tates, K.; Jansen, J.

    2011-01-01

    Objective This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. Methods An experiment using a 2 (complex vs. non-complex

  20. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    Science.gov (United States)

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  1. Lower region: a new cue for figure-ground assignment.

    Science.gov (United States)

    Vecera, Shaun P; Vogel, Edward K; Woodman, Geoffrey F

    2002-06-01

    Figure-ground assignment is an important visual process; humans recognize, attend to, and act on figures, not backgrounds. There are many visual cues for figure-ground assignment. A new cue to figure-ground assignment, called lower region, is presented: Regions in the lower portion of a stimulus array appear more figurelike than regions in the upper portion of the display. This phenomenon was explored, and it was demonstrated that the lower-region preference is not influenced by contrast, eye movements, or voluntary spatial attention. It was found that the lower region is defined relative to the stimulus display, linking the lower-region preference to pictorial depth perception cues. The results are discussed in terms of the environmental regularities that this new figure-ground cue may reflect.

  2. Using Retrieval Cues to Attenuate Return of Fear in Individuals With Public Speaking Anxiety.

    Science.gov (United States)

    Shin, Ki Eun; Newman, Michelle G

    2018-03-01

    Even after successful exposure, relapse is not uncommon. Based on the retrieval model of fear extinction (e.g., Vervliet, Craske, & Hermans, 2013), return of fear can occur after exposure due to an elapse of time (spontaneous recovery) or change in context (contextual renewal). The use of external salient stimuli presented throughout extinction (i.e., retrieval cues [RCs]) has been suggested as a potential solution to this problem (Bouton, 2002). The current study examined whether RCs attenuated return of fear in individuals with public speaking anxiety. Sixty-five participants completed a brief exposure while presented with two RC stimuli aimed at a variety of senses (visual, tactile, olfactory, and auditory). Later, half the participants were tested for return of fear in a context different from the exposure context, and the other half in the same context. Half of each context group were presented with the same cues as in exposure, while the other half were not. Return of fear due to an elapse of time, change in context, and effects of RCs were evaluated on subjective, behavioral, and physiological measures of anxiety. Although contextual renewal was not observed, results supported effects of RCs in reducing spontaneous recovery on behavioral and physiological measures of anxiety. There was also evidence that participants who were reminded of feeling anxious during exposure by the RCs benefited more from using them at follow-up, whereas those who perceived the cues as comforting (safety signals) benefited less. Clinical implications of the findings are discussed. Copyright © 2017. Published by Elsevier Ltd.

  3. Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues

    Directory of Open Access Journals (Sweden)

    W. H. Adams

    2003-02-01

    Full Text Available We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMMs), hidden Markov models (HMMs), and support vector machines (SVMs). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
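The late-fusion step described above combines unimodal detector outputs into one concept score; the paper uses Bayesian networks and SVMs for this, but the idea can be sketched with the simpler weighted-average variant (the function name and weights below are illustrative assumptions, not the authors' implementation):

```python
def late_fusion(scores, weights=None):
    """Fuse per-modality detector confidences for one concept.

    scores  -- dict mapping modality name -> detector score in [0, 1]
    weights -- optional per-modality weights (uniform if omitted)
    Returns the weighted mean score across modalities.
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * s for m, s in scores.items()) / total
```

For example, `late_fusion({'audio': 0.8, 'video': 0.6, 'text': 0.4})` averages the three unimodal confidences; a learned fusion model would instead fit the weights (or a nonlinear combination) from labeled data.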

  4. Non-hierarchical influence of visual form, touch and position cues on embodiment, agency and presence in virtual reality

    Directory of Open Access Journals (Sweden)

    Stephen Craig Pritchard

    2016-10-01

    Full Text Available The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step towards addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual–tactile synchrony, and visual–proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings.

  5. Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality

    Science.gov (United States)

    Pritchard, Stephen C.; Zopf, Regine; Polito, Vince; Kaplan, David M.; Williams, Mark A.

    2016-01-01

    The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual–tactile synchrony, and visual–proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings. PMID:27826275

  6. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4

    OpenAIRE

    Braun, Jochen

    1994-01-01

    In more than one respect, visual search for the most salient item and visual search for the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in ...

  7. Obese adults have visual attention bias for food cue images: evidence for altered reward system function.

    Science.gov (United States)

    Castellanos, E H; Charboneau, E; Dietrich, M S; Park, S; Bradley, B P; Mogg, K; Cowan, R L

    2009-09-01

    The major aim of this study was to investigate whether the motivational salience of food cues (as reflected by their attention-grabbing properties) differs between obese and normal-weight subjects in a manner consistent with altered reward system function in obesity. A total of 18 obese and 18 normal-weight, otherwise healthy, adult women between the ages of 18 and 35 participated in an eye-tracking paradigm in combination with a visual probe task. Eye movements and reaction time to food and non-food images were recorded during both fasted and fed conditions in a counterbalanced design. Eating behavior and hunger level were assessed by self-report measures. Obese individuals had higher scores than normal-weight individuals on self-report measures of responsiveness to external food cues and vulnerability to disruptions in control of eating behavior. Both obese and normal-weight individuals demonstrated increased gaze duration for food compared to non-food images in the fasted condition. In the fed condition, however, despite reduced hunger in both groups, obese individuals maintained the increased attention to food images, whereas normal-weight individuals had similar gaze duration for food and non-food images. Additionally, obese individuals had preferential orienting toward food images at the onset of each image. Obese and normal-weight individuals did not differ in reaction time measures in the fasted or fed condition. Food cue incentive salience is elevated equally in normal-weight and obese individuals during fasting. Obese individuals retain incentive salience for food cues despite feeding and decreased self-report of hunger. Sensitization to food cues in the environment and their dysregulation in obese individuals may play a role in the development and/or maintenance of obesity.

  8. Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms.

    Science.gov (United States)

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2018-04-25

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.

  9. The (un)clear effects of invalid retro-cues.

    Directory of Open Access Journals (Sweden)

    Marcel Gressmann

    2016-03-01

    Full Text Available Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. Here we revisit two recent studies that made interesting suggestions concerning invalid retro-cues: one study suggested that costs only occur for larger set sizes, and another suggested that including invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations should be treated with some caution at present.

  10. Network model of top-down influences on local gain and contextual interactions in visual cortex.

    Science.gov (United States)

    Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D

    2013-10-22

    The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.

  11. Working memory load and the retro-cue effect: A diffusion model account.

    Science.gov (United States)

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

    Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process, especially for visual stimuli, and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
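The diffusion-model account above attributes the retro-cue benefit to better evidence quality (drift rate) and a shorter nondecision time. A minimal random-walk simulation of that interpretation, with purely illustrative parameter values rather than the fitted ones:

```python
import random

def ddm_trial(drift, boundary=1.0, nondecision=0.3,
              dt=0.001, noise=1.0, rng=random):
    """Simulate one diffusion-model trial.

    Evidence accumulates from 0 toward +boundary (correct response) or
    -boundary (error); nondecision time is added to the decision time.
    Returns (correct, reaction_time_in_seconds).
    """
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= boundary, t + nondecision

# A retro-cue is modeled here as higher drift plus less nondecision time
rng = random.Random(0)
cued = [ddm_trial(1.5, nondecision=0.25, rng=rng) for _ in range(200)]
uncued = [ddm_trial(0.5, nondecision=0.35, rng=rng) for _ in range(200)]
```

Averaging over simulated trials, the cued condition yields faster and more accurate responses, mirroring the qualitative pattern of the empirical retro-cue benefit.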

  12. Early and late inhibitions elicited by a peripheral visual cue on manual response to a visual target: Are they based on Cartesian coordinates?

    Directory of Open Access Journals (Sweden)

    Fábio V. Magalhães

    2005-01-01

    Full Text Available A non-informative cue (C) elicits an inhibition of manual reaction time (MRT) to a visual target (T). We report an experiment to examine whether the spatial distribution of this inhibitory effect follows a polar or a Cartesian coordinate system. C appeared at one of 8 isoeccentric (7°) positions, the C-T angular distances (in polar coordinates) were 0° or multiples of 45°, and ISIs were 100 or 800 ms. Our main findings were: (a) MRT was maximal when the C-T distance was 0° and minimal when the C-T distance was 180°, and (b) besides this angular distance effect, there is a meridian effect. When C and T occurred in the same quadrant, MRT was longer than when T and C occurred at the same distance (45°) but on different sides of the vertical or horizontal meridian. The latter finding indicates that the spatial distribution of the cue's inhibitory effects is based on a Cartesian coordinate system.
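The stimulus geometry above is easy to express in code; a minimal sketch, assuming (for illustration only) positions at 22.5° + k·45° so that no location falls exactly on a meridian:

```python
import math

ECCENTRICITY = 7.0  # degrees of visual angle, as in the study

def position(angle_deg, ecc=ECCENTRICITY):
    """Cartesian (x, y) of an isoeccentric location from its polar angle."""
    rad = math.radians(angle_deg)
    return ecc * math.cos(rad), ecc * math.sin(rad)

def angular_distance(cue_deg, target_deg):
    """Smallest cue-target separation on the circle, in degrees (0-180)."""
    d = abs(cue_deg - target_deg) % 360
    return min(d, 360 - d)

def same_quadrant(cue_deg, target_deg):
    """True when cue and target lie on the same side of both meridians."""
    cx, cy = position(cue_deg)
    tx, ty = position(target_deg)
    return cx * tx > 0 and cy * ty > 0
```

Under this scheme a 45° cue-target separation can fall either within one quadrant (e.g. 22.5° and 67.5°) or across a meridian (e.g. 67.5° and 112.5°), which is the contrast underlying the reported meridian effect.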

  13. Matching cue size and task properties in exogenous attention.

    Science.gov (United States)

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  14. Processing of visual food cues during bitter taste perception in female patients with binge-eating symptoms: A cross-modal ERP study.

    Science.gov (United States)

    Schienle, Anne; Scharmüller, Wilfried; Schwab, Daniela

    2017-11-01

    In healthy individuals, the perception of an intense bitter taste decreased the reward value of visual food cues, as reflected by the reduction of a specific event-related brain potential (ERP), frontal late positivity. The current cross-modal ERP study investigated responses of female patients with binge-eating symptoms (BES) to this type of visual-gustatory stimulation. Women with BES (n=36) and female control participants (n=38) viewed food images after they rinsed their mouth with either bitter wormwood tea or water. Relative to controls, the patients showed elevated late positivity (LPP: 400-700ms) to the food images in the bitter condition. The LPP source was located in the medial prefrontal cortex. Both groups did not differ in the ratings for the fluids (intensity, bitterness, disgust). This ERP study showed that a bitter taste did not decrease late positivity to visual food cues (reflecting food reward) in women with BES. The atypical bitter responding might be a biological marker of this condition and possibly contributes to overeating. Future studies should additionally record food intake behavior to further investigate this mechanism. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  15. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
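In the simplest Gaussian case, combining cues according to their distributional statistics reduces to inverse-variance (reliability) weighting; a minimal sketch of that rule (an illustrative simplification, not the authors' GMM implementation):

```python
def combine_cues(audio_val, audio_var, visual_val, visual_var):
    """Reliability-weighted combination of an auditory and a visual cue.

    Each cue is weighted by its inverse variance, the statistically
    optimal rule when cue noise is independent and Gaussian.
    Returns (combined_estimate, combined_variance).
    """
    w_a, w_v = 1.0 / audio_var, 1.0 / visual_var
    estimate = (w_a * audio_val + w_v * visual_val) / (w_a + w_v)
    return estimate, 1.0 / (w_a + w_v)
```

With equal reliabilities the cues are simply averaged; a noisier visual cue (larger variance) pulls the combined estimate toward the auditory value, which is one way to think about why cue weights might change over development as perceivers learn each modality's statistics.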

  16. Out of sight, out of mind: racial retrieval cues increase the accessibility of social justice concepts.

    Science.gov (United States)

    Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T

    2017-09-01

    Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.

  17. Encoding Specificity and Nonverbal Cue Context: An Expansion of Episodic Memory Research.

    Science.gov (United States)

    Woodall, W. Gill; Folger, Joseph P.

    1981-01-01

    Reports two studies demonstrating the ability of nonverbal contextual cues to act as retrieval mechanisms for co-occurring language. Suggests that visual contextual cues, such as speech primacy and motor primacy gestures, can access linguistic target information. Motor primacy cues are shown to act as stronger retrieval cues. (JMF)

  18. Visual attention to alcohol cues and responsible drinking statements within alcohol advertisements and public health campaigns: Relationships with drinking intentions and alcohol consumption in the laboratory.

    Science.gov (United States)

    Kersbergen, Inge; Field, Matt

    2017-06-01

    Both alcohol advertising and public health campaigns increase alcohol consumption in the short term, and this may be attributable to attentional capture by alcohol-related cues in both types of media. The present studies investigated the association between (a) visual attention to alcohol cues and responsible drinking statements in alcohol advertising and public health campaigns, and (b) next-week drinking intentions (Study 1) and drinking behavior in the lab (Study 2). In Study 1, 90 male participants viewed 1 of 3 TV alcohol adverts (conventional advert; advert that emphasized responsible drinking; or public health campaign; between-subjects manipulation) while their visual attention to alcohol cues and responsible drinking statements was recorded, before reporting their drinking intentions. Study 2 used a within-subjects design in which 62 participants (27% male) viewed alcohol and soda advertisements while their attention to alcohol/soda cues and responsible drinking statements was recorded, before completing a bogus taste test with different alcoholic and nonalcoholic drinks. In both studies, alcohol cues attracted more attention than responsible drinking statements, except when viewing a public health TV campaign. Attention to responsible drinking statements was not associated with intentions to drink alcohol over the next week (Study 1) or alcohol consumption in the lab (Study 2). However, attention to alcohol portrayal cues within alcohol advertisements was associated with ad lib alcohol consumption in Study 2, although attention to other types of alcohol cues (brand logos, glassware, and packaging) was not associated. Future studies should investigate how responsible drinking statements might be improved to attract more attention.

  19. Priming and the guidance by visual and categorical templates in visual search

    Directory of Open Access Journals (Sweden)

    Anna eWilschut

    2014-02-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity towards the target feature, i.e. the extent to which observers searched selectively among items of the cued versus uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  20. Priming and the guidance by visual and categorical templates in visual search.

    Science.gov (United States)

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  1. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Science.gov (United States)

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
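The core detection step can be sketched in a few lines. A DAVIS sensor reports per-pixel log-brightness changes asynchronously in hardware; the frame-based simulation below (with an assumed event threshold and a simple stereo-pan mapping, neither taken from the paper) captures the same idea of turning salient visual changes into localizable auditory events.

```python
import numpy as np

THRESH = 0.2  # log-intensity change needed to emit an event (assumed value)

def events_from_frames(prev, curr, thresh=THRESH):
    """Frame-based simulation of a DVS/DAVIS sensor: emit (row, col, polarity)
    events wherever log brightness changes by more than `thresh`."""
    d = np.log(curr + 1e-6) - np.log(prev + 1e-6)
    on = np.argwhere(d > thresh)
    off = np.argwhere(d < -thresh)
    return [(r, c, +1) for r, c in on] + [(r, c, -1) for r, c in off]

def pan_for_events(events, width):
    """Map the mean horizontal event position to a stereo pan in [-1, 1],
    a crude stand-in for the paper's spatialized auditory rendering."""
    if not events:
        return None
    cols = np.array([c for _, c, _ in events])
    return float(2 * cols.mean() / (width - 1) - 1)

# A bright object appears on the right side of an otherwise static scene.
prev = np.full((4, 8), 0.5)
curr = prev.copy()
curr[1:3, 6] = 1.0
ev = events_from_frames(prev, curr)
print(len(ev), pan_for_events(ev, width=8))
```

Only the changed pixels produce events, and the pan value is strongly positive, i.e., the new object would be rendered as a sound from the right.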

  2. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Directory of Open Access Journals (Sweden)

    Scott A Stone

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.

  3. Gaze Cueing by Pareidolia Faces

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2013-12-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  4. Gaze cueing by pareidolia faces.

    Science.gov (United States)

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  5. Gaze Cueing by Pareidolia Faces

    OpenAIRE

    Kohske Takahashi; Katsumi Watanabe

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  6. Visual Sensory Signals Dominate Tactile Cues during Docked Feeding in Hummingbirds.

    Science.gov (United States)

    Goller, Benjamin; Segre, Paolo S; Middleton, Kevin M; Dickinson, Michael H; Altshuler, Douglas L

    2017-01-01

    direction of the feeder motion. These results suggest that docked hummingbirds are using visual information about the environment to maintain body position and orientation, and not actively tracking the motion of the feeder. The absence of flower tracking behavior in hummingbirds contrasts with the behavior of hawkmoths, and provides evidence that they rely primarily on the visual background rather than flower-based cues while feeding.

  7. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust, but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  8. Salient Object Detection via Structured Matrix Decomposition.

    Science.gov (United States)

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
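The low-rank-plus-sparse idea the paper builds on can be sketched as follows. This is only the unstructured baseline (alternating truncated SVD and elementwise soft-thresholding, with illustrative rank and threshold values); the paper's contribution is the tree-structured sparsity and Laplacian regularization layered on top of such a decomposition.

```python
import numpy as np

def lowrank_sparse_split(M, rank=1, lam=0.5, iters=50):
    """Baseline (unstructured) decomposition M ~ L + S by alternating a
    truncated SVD for the low-rank background L and elementwise
    soft-thresholding for the sparse salient part S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # L-step: best rank-`rank` approximation of the background
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]
        # S-step: soft-threshold the residual to keep only strong outliers
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Toy "image": an exactly rank-1 background plus a small bright salient patch.
bg = np.outer(np.linspace(1, 2, 20), np.linspace(1, 2, 20))
M = bg.copy()
M[5:8, 5:8] += 5.0  # the salient object
L, S = lowrank_sparse_split(M)
print(np.abs(S[5:8, 5:8]).mean().round(2), np.abs(S[0:3, 0:3]).mean().round(2))
```

On this toy input the sparse component concentrates on the salient patch while the smooth background is absorbed into the rank-1 term, which is exactly the separation the detection model exploits.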

  9. Consciousness wanted, attention found: Reasons for the advantage of the left visual field in identifying T2 among rapidly presented series.

    Science.gov (United States)

    Verleger, Rolf; Śmigasiewicz, Kamila

    2015-09-01

    Everyday experience suggests that people are equally aware of events in both hemi-fields. However, when two streams of stimuli are rapidly presented left and right containing two targets, the second target is better identified in the left than in the right visual field. This might be considered evidence for a right-hemisphere advantage in generating conscious percepts. However, this putative asymmetry of conscious perception cannot be measured independently of participants' access to their conscious percepts, and there is actually evidence from split-brain patients for the reverse, left-hemisphere advantage in having access to conscious percepts. Several other topics were studied in search of the responsible mechanism, among others: Mutual inhibition of hemispheres, cooperation of hemispheres in perceiving midline stimuli, and asymmetries in processing various perceptual inputs. Directing attention by salient cues turned out to be one of the few mechanisms capable of modifying the left visual-field advantage in this paradigm. Thus, this left visual-field advantage is best explained by the notion of a right-hemisphere advantage in directing attention to salient events. Dovetailing with the pathological asymmetries of attention after right-hemisphere lesions and with asymmetries of brain activation when healthy participants shift their attention, the present results extend that body of evidence by demonstrating unusually large and reliable behavioral asymmetries for attention-directing processes in healthy participants.

  10. Towards a framework for attention cueing in instructional animations: Guidelines for research and design

    NARCIS (Netherlands)

    B.B. de Koning (Björn); H.K. Tabbers (Huib); R.M.J.P. Rikers (Remy); G.W.C. Paas (Fred)

    2009-01-01

    This paper examines the transferability of successful cueing approaches from text and static visualization research to animations. Theories of visual attention and learning as well as empirical evidence for the instructional effectiveness of attention cueing are reviewed and, based on

  11. The Joint Effects of Spatial Cueing and Transcranial Direct Current Stimulation on Visual Acuity

    Directory of Open Access Journals (Sweden)

    Taly Bonder

    2018-02-01

    The present study examined the mutual influence of cortical neuroenhancement and allocation of spatial attention on perception. Specifically, it explored the effects of transcranial Direct Current Stimulation (tDCS) on visual acuity measured with a Landolt gap task and attentional precues. The exogenous cues were used to draw attention either to the location of the target or away from it, generating significant performance benefits and costs. Anodal tDCS applied to the posterior occipital area for 15 min improved performance during stimulation, reflecting heightened visual acuity. Reaction times were lower, and accuracy was higher, in the tDCS group compared to a sham control group. Additionally, in post-stimulation trials tDCS significantly interacted with the effect of precuing. Reaction times were lower in validly cued trials (benefit) and higher in invalid trials (cost) compared to neutrally cued trials, an effect that was more pronounced in the tDCS group than in the sham control group. The increase of cost and benefit effects in the tDCS group was of a similar magnitude, suggesting that anodal tDCS influenced the overall process of attention orienting. The observed interaction between the stimulation of the visual cortex and precueing indicates a magnification of attention modulation.

  12. Aging and involuntary attention capture: electrophysiological evidence for preserved attentional control with advanced age.

    Science.gov (United States)

    Lien, Mei-Ching; Gemperle, Alison; Ruthruff, Eric

    2011-03-01

    The present study examined whether people become more susceptible to capture by salient objects as they age. Participants searched a target display for a letter in a specific color and indicated its identity. In Experiment 1, this target display was preceded by a non-informative cue display containing one target-color box, one ignored-color box, and two white boxes. On half of the trials, this cue display also contained a salient-but-irrelevant abrupt onset. To assess capture by the target-color cue, we used the N2pc component of the event-related potential, thought to reflect attentional allocation to the left or right visual field. The target-color box in the cue display produced a substantial N2pc effect for younger adults and, most importantly, this effect was not diminished by the presence of an abrupt onset. Therefore, the abrupt onset was unable to capture attention away from the target-color cue. Critically, older adults demonstrated the same resistance to capture by the abrupt onset. Experiment 2 extended these findings to irrelevant color singleton cues. Thus, we argue that the ability to attend to relevant stimuli and resist capture by salient-but-irrelevant stimuli is preserved with advancing age.

  13. A model for the pilot's use of motion cues in roll-axis tracking tasks

    Science.gov (United States)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  14. The Effects of Cues on Neurons in the Basal Ganglia in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Sridevi V. Sarma

    2012-07-01

    Visual cues open a unique window to the understanding of Parkinson’s disease (PD). These cues can temporarily but dramatically improve PD motor symptoms. Although details are unclear, cues are believed to suppress pathological basal ganglia (BG) activity through activation of corticostriatal pathways. In this study, we investigated human BG neurophysiology under different cued conditions. We evaluated bursting, 10-30 Hz oscillations (OSCs), and directional tuning (DT) dynamics in subthalamic nucleus activity while 7 patients executed a two-step motor task. In the first step (predicted +cue), the patient moved to a target when prompted by a visual go cue that appeared 100% of the time. Here, the timing of the cue is predictable and the cue serves as an external trigger to execute a motor plan. In the second step, the cue appeared randomly 50% of the time, and the patient had to move to the same target as in the first step. When it appeared (unpredicted +cue), the motor plan was to be triggered by the cue, but its timing was not predictable. When the cue failed to appear (unpredicted -cue), the motor plan was triggered by the absence of the visual cue. We found that during predicted +cue and unpredicted -cue trials, OSCs significantly decreased and DT significantly increased above baseline, though these modulations occurred an average of 640 milliseconds later in unpredicted -cue trials. Movement and reaction times were comparable in these trials. During unpredicted +cue trials, OSCs and DT failed to modulate, though bursting significantly decreased after movement. Correspondingly, movement performance deteriorated. These findings suggest that during motor planning either a predictably timed external cue or an internally generated cue (generated by the absence of a cue) triggers the execution of a motor plan in premotor cortex, whose increased activation then suppresses pathological activity in STN through direct pathways, leading to motor facilitation in

  15. Cortisol, but not intranasal insulin, affects the central processing of visual food cues.

    Science.gov (United States)

    Ferreira de Sá, Diana S; Schulz, André; Streit, Fabian E; Turner, Jonathan D; Oitzl, Melly S; Blumenthal, Terry D; Schächinger, Hartmut

    2014-12-01

    Stress glucocorticoids and insulin are important endocrine regulators of energy homeostasis, but little is known about their central interaction on the reward-related processing of food cues. According to a balanced group design, healthy food-deprived men received either 40 IU intranasal insulin (n=13), 30 mg oral cortisol (n=12), both (n=15), or placebo (n=14). Acoustic startle responsiveness was assessed during presentation of food and non-food pictures. Cortisol enhanced startle responsiveness during visual presentation of "high glycemic" food pictures, but not during presentation of neutral and pleasant non-food pictures. Insulin had no effect. Based on the "frustrative nonreward" model, these results suggest that the reward value of high glycemic food items is specifically increased by cortisol.

  16. Odors as effective retrieval cues for stressful episodes.

    Science.gov (United States)

    Wiemers, Uta S; Sauvage, Magdalena M; Wolf, Oliver T

    2014-07-01

    Olfactory information seems to play a special role in memory due to the fast and direct processing of olfactory information in limbic areas like the amygdala and the hippocampus. This has led to the assumption that odors can serve as effective retrieval cues for autobiographical memories, especially emotional memories. The current study sought to investigate whether an olfactory cue can serve as an effective retrieval cue for memories of a stressful episode. A total of 95 participants were exposed to a psychosocial stressor or a well-matched but not stressful control condition. During both conditions, visual objects were present, either bound to the situation (central objects) or not (peripheral objects). Additionally, an ambient odor was present during both conditions. The next day, participants engaged in an unexpected object recognition task either under the influence of the same odor as was present during encoding (congruent odor) or another odor (non-congruent odor). Results show that stressed participants had better memory for all objects, and especially for central visual objects, if recognition took place under the influence of the congruent odor. An olfactory cue thus indeed seems to be an effective retrieval cue for stressful memories.

  17. Effects of the timing and identity of retrieval cues in individual recall: an attempt to mimic cross-cueing in collaborative recall.

    Science.gov (United States)

    Andersson, Jan; Hitch, Graham; Meudell, Peter

    2006-01-01

    Inhibitory effects in collaborative recall have been attributed to cross-cueing among partners, in the same way that part-set cues are known to impair recall in individuals. However, studies of part-set cueing in individuals typically involve presenting cues visually at the start of recall, whereas cross-cueing in collaboration is likely to be spoken and distributed over time. In an attempt to bridge this gap, three experiments investigated effects of presenting spoken part-set or extra-list cues at different times during individual recall. Cues had an inhibitory effect on recollection in the early part of the recall period, especially when presented in immediate succession at the start of recall. There was no difference between the effects of part-set and extra-list cues under these presentation conditions. However, more inhibition was generated by part-set than extra-list cues when cue presentation was distributed throughout recall. These results are interpreted as suggesting that cues presented during recall disrupt memory in two ways, corresponding to either blocking or modifying retrieval processes. Implications for explaining and possibly ameliorating inhibitory effects in collaborative recall are discussed.

  18. Multimedia instructions and cognitive load theory: effects of modality and cueing.

    Science.gov (United States)

    Tabbers, Huib K; Martens, Rob L; van Merriënboer, Jeroen J G

    2004-03-01

    Recent research on the influence of presentation format on the effectiveness of multimedia instructions has yielded some interesting results. According to cognitive load theory (Sweller, Van Merriënboer, & Paas, 1998) and Mayer's theory of multimedia learning (Mayer, 2001), replacing visual text with spoken text (the modality effect) and adding visual cues relating elements of a picture to the text (the cueing effect) both increase the effectiveness of multimedia instructions in terms of better learning results or less mental effort spent. The aim of this study was to test the generalisability of the modality and cueing effect in a classroom setting. The participants were 111 second-year students from the Department of Education at the University of Gent in Belgium (age between 19 and 25 years). The participants studied a web-based multimedia lesson on instructional design for about one hour. Afterwards they completed a retention and a transfer test. During both the instruction and the tests, self-report measures of mental effort were administered. Adding visual cues to the pictures resulted in higher retention scores, while replacing visual text with spoken text resulted in lower retention and transfer scores. Only a weak cueing effect and even a reverse modality effect have been found, indicating that both effects do not easily generalise to non-laboratory settings. A possible explanation for the reversed modality effect is that the multimedia instructions in this study were learner-paced, as opposed to the system-paced instructions used in earlier research.

  19. Olfactory cues are more effective than visual cues in experimentally triggering autobiographical memories.

    Science.gov (United States)

    de Bruijn, Maaike J; Bender, Michael

    2018-04-01

    Folk wisdom often refers to odours as potent triggers for autobiographical memory, akin to the Proust phenomenon that describes Proust's sudden recollection of a childhood memory when tasting a madeleine dipped into tea. Despite an increasing number of empirical studies on the effects of odours on cognition, conclusive evidence is still missing. We set out to examine the effectiveness of childhood and non-childhood odours as retrieval cues for autobiographical memories in a lab experiment. A total of 170 participants were presented with pilot-tested retrieval cues (either odours or images) to recall childhood memories and were then asked to rate the vividness, detail, and emotional intensity of these memories. Results showed that participants indeed reported richer memories when presented with childhood-related odours than childhood-related images or childhood-unrelated odours or images. An exploratory analysis of memory content with Linguistic Inquiry and Word Count did not reveal differences in affective content. The findings of this study support the notion that odours are particularly potent in eliciting rich memories and open up numerous avenues for further exploration.

  20. Retro-cue benefits in working memory without sustained focal attention.

    Science.gov (United States)

    Rerko, Laura; Souza, Alessandra S; Oberauer, Klaus

    2014-07-01

    In working memory (WM) tasks, performance can be boosted by directing attention to one memory object: When a retro-cue in the retention interval indicates which object will be tested, responding is faster and more accurate (the retro-cue benefit). We tested whether the retro-cue benefit in WM depends on sustained attention to the cued object by inserting an attention-demanding interruption task between the retro-cue and the memory test. In the first experiment, the interruption task required participants to shift their visual attention away from the cued representation and to a visual classification task on colors. In the second and third experiments, the interruption task required participants to shift their focal attention within WM: Attention was directed away from the cued representation by probing another representation from the memory array prior to probing the cued object. The retro-cue benefit was not attenuated by shifts of perceptual attention or by shifts of attention within WM. We concluded that sustained attention is not needed to maintain the cued representation in a state of heightened accessibility.

  1. Visual form predictions facilitate auditory processing at the N1.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on prediction rather than on multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what the auditory stimulus will be and when it will occur. Copyright © 2016. Published by Elsevier Ltd.

  2. Visual cues are relevant in behavioral control measures for Cosmopolites sordidus (Coleoptera: Curculionidae).

    Science.gov (United States)

    Reddy, Gadi V P; Raman, A

    2011-04-01

    Trap designs for the banana root borer, Cosmopolites sordidus (Germar) (Coleoptera: Curculionidae), have essentially been based on the understanding that C. sordidus relies primarily on chemical cues. Our present results indicate that these borers also rely on visual cues. Previous studies have demonstrated that among eight differently colored traps tested in the field, brown traps were the most effective, outperforming yellow, red, gray, blue, black, white, and green traps; mahogany-brown was more effective than other shades of brown. In the current study, the efficiency of ground traps of different colors for capturing C. sordidus was evaluated in the laboratory. Responses of C. sordidus to pheromone-baited ground traps of several different colors (used either individually or as 1:1 mixtures of two different colors) were compared with the standardized mahogany-brown traps. Mixing mahogany-brown with other colors had no significant effect. In contrast, laboratory color-choice tests indicated that C. sordidus preferred black traps over traps of other colors, with no specific preference among different shades of black. Here again, mixing black with other colors (1:1) had no influence on the catches. Therefore, any other color mixed with mahogany-brown or black does not cause color-specific dilution of attractiveness. By exploiting these results, it may be possible to produce efficacious trapping systems that could be used in a behavioral approach to banana root borer control.

  3. Prospective memory in multiple sclerosis: The impact of cue distinctiveness and executive functioning.

    Science.gov (United States)

    Dagenais, Emmanuelle; Rouleau, Isabelle; Tremblay, Alexandra; Demers, Mélanie; Roger, Élaine; Jobin, Céline; Duquette, Pierre

    2016-11-01

    Prospective memory (PM), the ability to remember to do something at the appropriate time in the future, is crucial in everyday life. One way to improve PM performance is to increase the salience of a cue announcing that it is time to act. Multiple sclerosis (MS) patients often report PM failures and there is growing evidence of PM deficits among this population. However, such deficits are poorly characterized and their relation to cognitive status remains unclear. To better understand PM deficits in MS patients, this study investigated the impact of cue salience on PM, and its relation to retrospective memory (RM) and executive deficits. Thirty-nine (39) MS patients were compared to 18 healthy controls on a PM task modulating cue salience during an ongoing general knowledge test. MS patients performed worse than controls on the PM task, regardless of cue salience. MS patients' executive functions contributed significantly to the variance in PM performance, whereas age, education and RM did not. Interestingly, low- and high-executive patients' performance differed when the cue was not salient, but not when it was, suggesting that low-executive MS patients benefited more from cue salience. These findings add to the growing evidence of PM deficits in MS and highlight the contribution of executive functions to certain aspects of PM. In low-executive MS patients, high cue salience improves PM performance by reducing the detection threshold and need for environmental monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Visual selective attention in amnestic mild cognitive impairment.

    Science.gov (United States)

    McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E

    2014-11-01

    Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  6. AVATAR -- Adaptive Visualization Aid for Touring And Recovery

    Energy Technology Data Exchange (ETDEWEB)

    L. O. Hall; K. W. Bowyer; N. Chawla; T. Moore, Jr.; W. P. Kegelmeyer

    2000-01-01

    This document provides a report on the initial development of software which uses a standard visualization tool to determine, label and display salient regions in large 3D physics simulation datasets. This software uses parallel pattern recognition behind the scenes to handle the huge volume of data. This software is called AVATAR (Adaptive Visualization Aid for Touring and Recovery). It integrates approaches to gathering labeled training data, learning from large training sets utilizing parallelism and the final display of salient data in unseen visualization data sets. The paper uses vorticity fields for a large-eddy simulation to illustrate the method.

  7. Visible propagation from invisible exogenous cueing.

    Science.gov (United States)

    Lin, Zhicheng; Murray, Scott O

    2013-09-20

    Perception and performance are affected not just by what we see but also by what we do not see: inputs that escape our awareness. While conscious processing and unconscious processing have been assumed to be separate and independent, here we report the propagation of unconscious exogenous cueing as determined by conscious motion perception. In a paradigm combining masked exogenous cueing and apparent motion, we show that, when an onset cue was rendered invisible, the unconscious exogenous cueing effect traveled, manifesting at uncued locations (4° apart) in accordance with conscious perception of visual motion; the effect diminished when the cue-to-target distance was 8° apart. In contrast, conscious exogenous cueing manifested at both distances. Further evidence reveals that the unconscious and conscious nonretinotopic effects could not be explained by an attentional gradient, nor by bottom-up, energy-based motion mechanisms, but rather they were subserved by top-down, tracking-based motion mechanisms. We thus term these effects mobile cueing. Taken together, unconscious mobile cueing effects (a) demonstrate a previously unknown degree of flexibility of unconscious exogenous attention; (b) embody a simultaneous dissociation and association of attention and consciousness, in which exogenous attention can occur without cue awareness ("dissociation"), yet at the same time its effect is contingent on conscious motion tracking ("association"); and (c) underscore the interaction of conscious and unconscious processing, providing evidence for an unconscious effect that is not automatic but controlled.

  8. Effects of cue-exposure treatment on neural cue reactivity in alcohol dependence: a randomized trial.

    Science.gov (United States)

    Vollstädt-Klein, Sabine; Loeber, Sabine; Kirsch, Martina; Bach, Patrick; Richter, Anne; Bühler, Mira; von der Goltz, Christoph; Hermann, Derik; Mann, Karl; Kiefer, Falk

    2011-06-01

    In alcohol-dependent patients, alcohol-associated cues elicit brain activation in mesocorticolimbic networks involved in relapse mechanisms. Cue-exposure based extinction training (CET) has been shown to be efficacious in the treatment of alcoholism; however, it has remained unexplored whether CET mediates its therapeutic effects via changes of activity in mesolimbic networks in response to alcohol cues. In this study, we assessed CET treatment effects on cue-induced responses using functional magnetic resonance imaging (fMRI). In a randomized controlled trial, abstinent alcohol-dependent patients were randomly assigned to a CET group (n = 15) or a control group (n = 15). All patients underwent an extended detoxification treatment comprising medically supervised detoxification, health education, and supportive therapy. The CET patients additionally received nine CET sessions over 3 weeks, exposing the patient to his/her preferred alcoholic beverage. Cue-induced fMRI activation to alcohol cues was measured at pretreatment and posttreatment. Compared with pretreatment, fMRI cue-reactivity reduction was greater in the CET relative to the control group, especially in the anterior cingulate gyrus and the insula, as well as limbic and frontal regions. Before treatment, increased cue-induced fMRI activation was found in limbic and reward-related brain regions and in visual areas. After treatment, the CET group showed less activation than the control group in the left ventral striatum. The study provides first evidence that an exposure-based psychotherapeutic intervention in the treatment of alcoholism impacts on brain areas relevant for addiction memory and attentional focus to alcohol-associated cues and affects mesocorticolimbic reward pathways suggested to be pathophysiologically involved in addiction. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  9. Implied Spatial Meaning and Visuospatial Bias: Conceptual Processing Influences Processing of Visual Targets and Distractors.

    Directory of Open Access Journals (Sweden)

    Davood G Gozli

    Full Text Available Concepts with implicit spatial meaning (e.g., "hat", "boots") can bias visual attention in space. This result is typically found in experiments with a single visual target per trial, which can appear at one of two locations (e.g., above vs. below). Furthermore, the interaction is typically found in the form of speeded responses to targets appearing at the compatible location (e.g., faster responses to a target above fixation, after reading "hat"). It has been argued that these concept-space interactions could also result from experimentally-induced associations between the binary set of locations and the conceptual categories with upward and downward meaning. Thus, rather than reflecting a conceptually driven spatial bias, the effect could reflect a benefit for compatible cue-target sequences that occurs only after target onset. We addressed these concerns by going beyond a binary set of locations and employing a search display consisting of four items (above, below, left, and right). Within each search trial, before performing a visual search task, participants performed a conceptual task involving concepts with implicit upward or downward meaning. The search display, in addition to including a target, could also include a salient distractor. Assuming a conceptually driven visual bias, we expected to observe, first, a benefit for target processing at the compatible location and, second, an increase in the cost of the salient distractor. The findings confirmed both predictions, suggesting that concepts do indeed generate a spatial bias. Finally, results from a control experiment without the conceptual task indicate an axis-specific effect in addition to the location-specific effect, suggesting that concepts might cause both location-specific and axis-specific spatial biases. Taken together, our findings provide additional support for the involvement of spatial processing in conceptual understanding.

  10. Eye tracking for visual marketing

    NARCIS (Netherlands)

    Wedel, M.; Pieters, R.

    2008-01-01

    We provide the theory of visual attention and eye-movements that serves as a basis for evaluating eye-tracking research and for discussing salient and emerging issues in visual marketing. Motivated from its rising importance in marketing practice and its potential for theoretical contribution, we

  11. Alcohol-cue exposure effects on craving and attentional bias in underage college-student drinkers.

    Science.gov (United States)

    Ramirez, Jason J; Monti, Peter M; Colwill, Ruth M

    2015-06-01

    The effect of alcohol-cue exposure on eliciting craving has been well documented, and numerous theoretical models assert that craving is a clinically significant construct central to the motivation and maintenance of alcohol-seeking behavior. Furthermore, some theories propose a relationship between craving and attention, such that cue-induced increases in craving bias attention toward alcohol cues, which, in turn, perpetuates craving. This study examined the extent to which alcohol cues induce craving and bias attention toward alcohol cues among underage college-student drinkers. We designed within-subject cue-reactivity and visual-probe tasks to assess in vivo alcohol-cue exposure effects on craving and attentional bias on 39 undergraduate college drinkers (ages 18-20). Participants expressed greater subjective craving to drink alcohol following in vivo cue exposure to a commonly consumed beer compared with water exposure. Furthermore, following alcohol-cue exposure, participants exhibited greater attentional biases toward alcohol cues as measured by a visual-probe task. In addition to the cue-exposure effects on craving and attentional bias, within-subject differences in craving across sessions marginally predicted within-subject differences in attentional bias. Implications for both theory and practice are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
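    Attentional bias in visual-probe tasks is conventionally scored as a reaction-time difference between probe locations. The sketch below shows that common scoring convention only; the exact analysis used in this study is not given in the abstract, and the trial data, function name, and parameters here are hypothetical.

```python
import numpy as np

def attentional_bias_index(rt_ms, probe_at_alcohol):
    """Conventional visual-probe bias index: mean RT when the probe
    replaces the neutral picture minus mean RT when it replaces the
    alcohol picture. Positive values mean faster responses at the
    alcohol location, i.e. attention biased toward alcohol cues."""
    rt = np.asarray(rt_ms, dtype=float)
    at_alc = np.asarray(probe_at_alcohol, dtype=bool)
    return rt[~at_alc].mean() - rt[at_alc].mean()

# Hypothetical trial data: RTs in ms, and whether the probe
# appeared at the alcohol-picture location on each trial.
rts = [512, 498, 530, 505, 540, 521]
probe_at_alcohol = [True, True, True, False, False, False]
print(attentional_bias_index(rts, probe_at_alcohol))  # ≈ 8.67 ms toward alcohol cues
```

    A within-subject comparison (as in this study) would compute this index once per session and test the difference between alcohol-cue and water-cue sessions.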

  12. Contextual Cueing Improves Attentional Guidance, Even When Guidance Is Supposedly Optimal

    OpenAIRE

    Harris, A. M.; Remington, R. W.

    2017-01-01

    Visual search through previously encountered contexts typically produces reduced reaction times compared with search through novel contexts. This contextual cueing benefit is well established, but there is debate regarding its underlying mechanisms. Eye-tracking studies have consistently shown reduced number of fixations with repetition, supporting improvements in attentional guidance as the source of contextual cueing. However, contextual cueing benefits have been shown in conditions in whic...

  13. Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.

    Science.gov (United States)

    Schankin, Andrea; Schubö, Anna

    2009-05-01

    Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the LRP, were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.

  14. Attention biases visual activity in visual short-term memory.

    Science.gov (United States)

    Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina

    2014-07-01

    In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.

  15. A magnetoencephalography study of visual processing of pain anticipation.

    Science.gov (United States)

    Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C

    2014-07-15

    Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of the networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. Using magnetoencephalography, we evaluated the neural processing of pain anticipation in 10 healthy subjects. Anticipatory cortical activity elicited by consecutive visual cues signifying an imminent painful stimulus was compared with that elicited by cues signifying a nonpainful stimulus or no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was most prominent early on, when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time-efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications for processes involved in pain anticipation and maladaptive pain conditioning. Copyright © 2014 the American Physiological Society.

  16. Evaluation of the use of visual and location cues by the Broad-tailed hummingbird (Selasphorus platycercus) foraging in flowers of Penstemon roseus

    Directory of Open Access Journals (Sweden)

    Guillermo Pérez

    2012-03-01

    Full Text Available In hummingbirds, spatial memory plays an important role during foraging, which relies on the use of specific cues (visual) or spatial cues (the location of flowers and plants with nectar). However, the use of these cues by hummingbirds may vary with the spatial scale they face when visiting flowers of one or more plants during foraging; we tested this with individuals of the Broad-tailed hummingbird Selasphorus platycercus. To evaluate possible variation in cue use, experiments were carried out under semi-natural conditions using flowers of Penstemon roseus, a plant native to the study site. By manipulating the presence/absence of a reward (nectar) and of visual cues, we evaluated the use of spatial memory during foraging between two plants (experiment 1) and within a single plant (experiment 2). The results showed that hummingbirds used memory for the location of the plant whose flowers had yielded a reward, regardless of the presence of visual cues. In contrast, among the flowers of a single plant, after a short learning period hummingbirds can use visual cues to guide their foraging and discriminate against unrewarded flowers. Likewise, in the absence of visual cues, individuals based their foraging on memory for the location of the previously visited rewarding flower. These results suggest plasticity in hummingbird foraging behavior, influenced by spatial scale and by information acquired during previous visits.

  17. Dim target detection method based on salient graph fusion

    Science.gov (United States)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is a key problem in digital image processing. With the development of multi-spectrum imaging sensors, fusing information from different spectral images has become a common way to improve the performance of dim target detection. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, Gabor filters with multiple directions and contrast filters with multiple scales are combined to construct a salient graph from a digital image. A maximum-salience fusion strategy is then designed to fuse the salient graphs from different spectral images, and a top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improved the probability of target detection and reduced the probability of false alarm on cluttered background images.
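    The fuse-then-detect pipeline described in this abstract (per-band saliency, maximum-salience fusion across spectral bands, top-hat detection) can be sketched briefly. This is a rough illustration, not the authors' implementation: the contrast filter is approximated here by multi-scale differences of Gaussians, the Gabor stage is omitted, and the mean-plus-k-standard-deviations threshold is an assumption.

```python
import numpy as np
from scipy import ndimage

def contrast_saliency(img, scales=(1, 2, 4)):
    # Multi-scale center-surround contrast, approximated by
    # differences of Gaussian blurs at several scales.
    img = img.astype(float)
    sal = np.zeros_like(img)
    for s in scales:
        center = ndimage.gaussian_filter(img, s)
        surround = ndimage.gaussian_filter(img, 2 * s)
        sal = np.maximum(sal, np.abs(center - surround))
    return sal

def fuse_and_detect(spectral_images, struct_size=5, k=3.0):
    # Maximum-salience fusion across spectral bands, then white
    # top-hat filtering to suppress background structure, and a
    # simple mean + k*std threshold (threshold rule is an assumption).
    fused = np.maximum.reduce([contrast_saliency(im) for im in spectral_images])
    tophat = ndimage.white_tophat(fused, size=struct_size)
    return tophat > tophat.mean() + k * tophat.std()

# Synthetic example: a weak point target on a noisy background,
# visible in only one of two hypothetical spectral bands.
rng = np.random.default_rng(0)
band1 = rng.normal(0.5, 0.03, (64, 64))
band2 = rng.normal(0.5, 0.03, (64, 64))
band2[32, 32] += 0.5
detections = fuse_and_detect([band1, band2])
print(bool(detections[32, 32]))
```

    On this synthetic example the weak point target present in only one band survives the max-fusion and top-hat stages, while the smooth noisy background is suppressed.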

  18. Snack intake is reduced using an implicit, high-level construal cue.

    Science.gov (United States)

    Price, Menna; Higgs, Suzanne; Lee, Michelle

    2016-08-01

    Priming a high level construal has been shown to enhance self-control and reduce preference for indulgent food. Subtle visual cues have been shown to enhance the effects of a priming procedure. The current study therefore examined the combined impact of construal level and a visual cue reminder on the consumption of energy-dense snacks. A student and community-based adult sample with a wide age and body mass index (BMI) range (N = 176) were randomly assigned to a high or low construal condition in which a novel symbol was embedded. Afterward participants completed a taste test of ad libitum snack foods in the presence or absence of the symbol. The high (vs. the low) construal level prime successfully generated more abstract responses, and participants consumed fewer snacks in the presence of a visual cue-reminder. This may be a practical technique for reducing overeating and has the potential to be extended to other unhealthy behaviors. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. The importance of surface-based cues for face discrimination in non-human primates.

    Science.gov (United States)

    Parr, Lisa A; Taubert, Jessica

    2011-07-07

    Understanding how individual identity is processed from faces remains a complex problem. Contrast reversal, showing faces in photographic negative, impairs face recognition in humans and demonstrates the importance of surface-based information (shading and pigmentation) in face recognition. We tested the importance of contrast information for face encoding in chimpanzees and rhesus monkeys using a computerized face-matching task. Results showed that contrast reversal (positive to negative) selectively impaired face processing in these two species, although the impairment was greater for chimpanzees. Unlike chimpanzees, however, monkeys performed just as well matching negative to positive faces, suggesting that they retained some ability to extract identity information from negative faces. A control task showed that chimpanzees, but not rhesus monkeys, performed significantly better matching face parts compared with whole faces after a contrast reversal, suggesting that contrast reversal acts selectively on face processing, rather than general visual-processing mechanisms. These results confirm the importance of surface-based cues for face processing in chimpanzees and humans, while the results were less salient for rhesus monkeys. These findings make a significant contribution to understanding the evolution of cognitive specializations for face processing among primates, and suggest potential differences between monkeys and apes.

  20. Facial motion engages predictive visual mechanisms.

    Directory of Open Access Journals (Sweden)

    Jordy Kaufman

    Full Text Available We employed a novel cuing paradigm to assess whether dynamically versus statically presented facial expressions differentially engaged predictive visual mechanisms. Participants were presented with a cueing stimulus that was either the static depiction of a low intensity expressed emotion, or a dynamic sequence evolving from a neutral expression to the low intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to that displayed by the cue, although expressed at a high intensity. The probe face had either the same or a different identity from the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent to the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.

  1. Effect of Cue Timing and Modality on Gait Initiation in Parkinson Disease With Freezing of Gait.

    Science.gov (United States)

    Lu, Chiahao; Amundsen Huffmaster, Sommer L; Tuite, Paul J; Vachon, Jacqueline M; MacKinnon, Colum D

    2017-07-01

    To examine the effects of cue timing, across 3 sensory modalities, on anticipatory postural adjustments (APAs) during gait initiation in people with Parkinson disease (PD). Observational study. Biomechanics research laboratory. Individuals with idiopathic PD (N=25; 11 with freezing of gait [FOG]) were studied in the off-medication state (12-h overnight withdrawal). Gait initiation was tested without cueing (self-initiated) and with 3 cue timing protocols: fixed delay (3s), random delay (4-12s), and countdown (3-2-1-go, 1-s intervals) across 3 sensory modalities (acoustic, visual, and vibrotactile). The incidence and spatiotemporal characteristics of APAs during gait initiation were analyzed, including vertical ground reaction forces and center of pressure. All cue timings and modalities increased the incidence and amplitude of APAs compared with self-initiated stepping. Acoustic and visual cues, but not vibrotactile stimulation, improved the timing of APAs. Fixed delay or countdown timing protocols were more effective at decreasing APA durations than random delay cues. Cue-evoked improvements in APA timing, but not amplitude, correlated with the level of impairment during self-initiated gait. Cues did not improve the late push-off phase in the FOG group. External cueing improves gait initiation in PD regardless of cue timing, modality, or clinical phenotype (with and without FOG). Acoustic or visual cueing with predictive timing provided the greatest improvements in gait initiation; therefore, these protocols may provide the best outcomes when applied by caregivers or devices. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  2. Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder.

    Science.gov (United States)

    Lundahl, Leslie H; Greenwald, Mark K

    2016-09-01

    Evaluate magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis-dependent volunteers. 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) and then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15 min post-MJ cue exposure to determine the magnitude of cue-responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were also collected at 30, 90, and 150 min post-MJ cue to examine the duration of cue-effects. In cue-response magnitude analyses, visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30, 90 and 150 min post-MJ cue exposure, relative to baseline and neutral cues. Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150 min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue reactivity studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder

    Science.gov (United States)

    Lundahl, Leslie H.; Greenwald, Mark K.

    2016-01-01

    Aims: Evaluate magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis-dependent volunteers. Methods: 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) and then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15 min post-MJ cue exposure to determine the magnitude of cue-responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were also collected at 30, 90, and 150 min post-MJ cue to examine the duration of cue-effects. Results: In cue-response magnitude analyses, visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30, 90 and 150 min post-MJ cue exposure, relative to baseline and neutral cues. Conclusions: Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150 min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue reactivity studies. PMID:27436749

  4. The effect of visual salience on memory-based choices.

    Science.gov (United States)

    Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J

    2014-02-01

    Deciding whether a stimulus is the "same" as or "different" from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulating bottom-up attention by changing stimulus visual salience impacts on later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had the same or a different feature relative to a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.

  5. Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements.

    Science.gov (United States)

    Gerig, Nicolas; Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter

    2018-01-01

    Goal-directed reaching for real-world objects by humans is enabled by visual depth cues. In virtual environments, the number and quality of available visual depth cues are limited, which may affect reaching performance and the quality of reaching movements. We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those cues. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or the third screen group, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance; only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth cues.

  6. Proust nose best: odors are better cues of autobiographical memory.

    Science.gov (United States)

    Chu, Simon; Downes, John J

    2002-06-01

    The Proust phenomenon is an enduring piece of folk wisdom that asserts that odors are particularly powerful autobiographical memory cues. We provide a more formal exposition of this phenomenon and test it in two experiments, using a novel double-cuing methodology designed to negate less interesting explanations. In both studies, recall of an autobiographical event was initially cued by a verbal label (an odor name) for a fixed period, following which a second, extended recall attempt was cued by the same verbal label, the relevant odor, an irrelevant odor, or a visual cue. The focus of Experiment 1 was participants' ratings of the emotional quality of their autobiographical memories. In Experiment 2, content analysis was employed to determine the quantity of information in participants' recollections. Results revealed that odor-cued autobiographical memories were reliably different in terms of qualitative ratings and reliably superior in the amount of detail yielded. Moreover, visual cues and incongruent olfactory cues appeared to have a detrimental effect on the amount of detail recalled. These results support the proposal that odors are especially effective as reminders of past experience.

  7. Observing how others lift light or heavy objects: which visual cues mediate the encoding of muscular force in the primary motor cortex?

    Science.gov (United States)

    Alaerts, Kaat; Swinnen, Stephan P; Wenderoth, Nicole

    2010-06-01

    Observers are able to judge quite accurately the weights lifted by others. Only recently, neuroscience has focused on the role of the motor system to accomplish this task. In this respect, a previous transcranial magnetic stimulation (TMS) study showed that the muscular force requirements of an observed action are encoded by the primary motor cortex (M1). Overall, three distinct visual sources may provide information on the applied force of an observed lifting action, namely, (i) the perceived kinematics, (ii) the hand contraction state and finally (iii) intrinsic object properties. The principal aim of the present study was to disentangle these three visual sources and to explore their importance in mediating the encoding of muscular force requirements in the observer's motor system. A series of experiments are reported in which TMS was used to measure 'force-related' responses from the hand representation in left M1 while subjects observed distinct action-stimuli. Overall, results indicated that observation-induced activity in M1 reflects the level of observed force when kinematic cues of the lift (exp. 1) or cues on the hand contraction state (exp. 2) are available. Moreover, when kinematic cues and intrinsic object properties provide distinct information on the force requirements of an observed lifting action, results from experiment 3 indicated a strong preference for the use of kinematic features in mapping the force requirements of the observed action. In general, these findings support the hypothesis that the primary motor cortex contributes to action observation by mapping the muscle-related features of observed actions. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Virtual-reality techniques resolve the visual cues used by fruit flies to evaluate object distances.

    Science.gov (United States)

    Schuster, Stefan; Strauss, Roland; Götz, Karl G

    2002-09-17

    Insects can estimate distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate the first comprehensive solutions to this problem by substituting virtual for real objects; a tracker-controlled 360 degrees panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced in the small fruit fly deserves utilization in self-moving robots.
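    The motion-parallax cue in finding (2) can be illustrated with a back-of-the-envelope calculation (a simplified sketch, not the authors' model): for a sideways displacement of length b, a stationary object at distance d shifts across the retina by an angle θ with tan θ = b/d, so the observer could in principle recover d from its own displacement and the observed image translation.

```python
import math

def distance_from_parallax(step_m, image_shift_rad):
    # small-baseline estimate: object distance recovered from the retinal
    # image translation produced by the observer's own sideways displacement
    return step_m / math.tan(image_shift_rad)

# a 2 mm sideways step shifting an object's image by ~5.71 degrees
# implies the object is roughly 20 mm away, i.e. near the lower end of
# the 20-200 mm discrimination range reported in the abstract
d = distance_from_parallax(0.002, math.radians(5.71))
```

    The numbers here are illustrative only; the paper's point is that flies integrate exactly this kind of self-motion/feedback relation over a window of about 2 s.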

  9. Working Memory and Speech Recognition in Noise under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type among Adults with Hearing Loss

    Science.gov (United States)

    Miller, Christi W.; Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly

    2017-01-01

    Purpose: This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method: Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2…

  10. The relative use of proximity, shape similarity, and orientation as visual perceptual grouping cues in tufted capuchin monkeys (Cebus apella) and humans (Homo sapiens).

    Science.gov (United States)

    Spinozzi, Giovanna; De Lillo, Carlo; Truppa, Valentina; Castorina, Giulia

    2009-02-01

    Recent experimental results suggest that human and nonhuman primates differ in how they process visual information to assemble component parts into global shapes. To assess whether some of the observed differences in perceptual grouping could be accounted for by the prevalence of different grouping factors in different species, we carried out 2 experiments designed to evaluate the relative use of proximity, similarity of shape, and orientation as grouping cues in humans (Homo sapiens) and capuchin monkeys (Cebus apella). Both species showed similarly high levels of accuracy using proximity as a cue. Moreover, for both species, grouping by orientation similarity produced a lower level of performance than grouping by proximity. Differences emerged with respect to the use of shape similarity as a cue. In humans, grouping by shape similarity also proved less effective than grouping by proximity but the same was not observed in capuchins. These results suggest that there may be subtle differences between humans and capuchin monkeys in the weighting assigned to different grouping cues that may affect the way in which they combine local features into global shapes. Copyright 2009 APA, all rights reserved.

  11. Visual sexual stimuli – cue or reward? A key for interpreting brain imaging studies on human sexual behaviors

    Directory of Open Access Journals (Sweden)

    Mateusz Gola

    2016-08-01

    Full Text Available There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS) in human sexuality research, including the emerging field of research on compulsive sexual behaviors. A central question in this field is whether behaviors such as extensive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned (cue) and unconditioned (reward) stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies, we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as cues (conditioned stimuli) or rewards (unconditioned stimuli). Here we present our own perspective, which is that in most laboratory settings VSS play the role of rewards (unconditioned stimuli), as evidenced by: 1) the experience of pleasure while watching VSS, possibly accompanied by genital reaction; 2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; 3) a willingness to exert effort to view VSS, similarly as for other rewarding stimuli such as money; and/or 4) conditioning for cues (CS) predictive of VSS. We hope that this perspective paper will initiate a scientific discussion on this important and overlooked topic and increase attention to appropriate interpretations of results of human neuroimaging studies using VSS.

  12. Facilitated orienting underlies fearful face-enhanced gaze cueing of spatial location

    Directory of Open Access Journals (Sweden)

    Joshua M. Carlson

    2016-12-01

    Full Text Available Faces provide a platform for non-verbal communication through emotional expression and eye gaze. Fearful facial expressions are salient indicators of potential threat within the environment, which automatically capture observers' attention. However, the degree to which fearful facial expressions facilitate attention to others' gaze is unresolved. Given that fearful gaze indicates the location of potential threat, it was hypothesized that fearful gaze facilitates location processing. To test this hypothesis, a gaze cueing study with fearful and neutral faces assessing target localization was conducted. The task consisted of leftward, rightward, and forward/straight gaze trials. The inclusion of forward gaze trials allowed for the isolation of orienting and disengagement components of gaze-directed attention. The results suggest that both neutral and fearful gaze modulate attention through orienting and disengagement components. Fearful gaze, however, resulted in quicker orienting than neutral gaze. Thus, fearful faces enhance gaze cueing of spatial location through facilitated orienting.

  13. Cue combination encoding via contextual modulation of V1 and V2 neurons

    Directory of Open Access Journals (Sweden)

    Zarella MD

    2016-10-01

    Full Text Available Mark D Zarella, Daniel Y Ts’o Department of Neurosurgery, SUNY Upstate Medical University, Syracuse, NY, USA Abstract: Neurons in early visual cortical areas encode the local properties of a stimulus in a number of different feature dimensions such as color, orientation, and motion. It has been shown, however, that stimuli presented well beyond the confines of the classical receptive field can augment these responses in a way that emphasizes these local attributes within the greater context of the visual scene. This mechanism imparts global information to cells that are otherwise considered local feature detectors and can potentially serve as an important foundation for surface segmentation, texture representation, and figure–ground segregation. The role of early visual cortex toward these functions remains somewhat of an enigma, as it is unclear how surface segmentation cues are integrated from multiple feature dimensions. We examined the impact of orientation- and motion-defined surface segmentation cues in V1 and V2 neurons using a stimulus in which the two features are completely separable. We find that, although some cells are modulated in a cue-invariant manner, many cells are influenced by only one cue or the other. Furthermore, cells that are modulated by both cues tend to be more strongly affected when both cues are presented together than when presented individually. These results demonstrate two mechanisms by which cue combinations can enhance salience. We find that feature-specific populations are more frequently encountered in V1, while cue additivity is more prominent in V2. These results highlight how two strongly interconnected areas at different stages in the cortical hierarchy can potentially contribute to scene segmentation. Keywords: striate, extrastriate, extraclassical, texture, segmentation

  14. Is hunger important to model in fMRI visual food-cue reactivity paradigms in adults with obesity and how should this be done?

    Science.gov (United States)

    Chin, Shao-Hua; Kahathuduwa, Chanaka N; Stearns, Macy B; Davis, Tyler; Binks, Martin

    2018-01-01

    We considered 1) the influence of self-reported hunger on behavioral and fMRI food-cue reactivity (fMRI-FCR) and 2) optimal methods to model this. Adults (N = 32; 19-60 years; F = 21; BMI 30-39.9 kg/m²) participated in an fMRI-FCR task that required rating 240 images of food and matched objects for 'appeal'. Hunger, satiety, thirst, fullness and emptiness were measured pre- and post-scan (visual analogue scales). Hunger, satiety, fullness and emptiness were combined to form a latent factor (appetite). Post- vs. pre-scores were compared using paired t-tests. In mixed-effects models, appeal/fMRI-FCR responses were regressed on image (i.e. food/objects), with random intercepts and slopes of image for functional runs nested within subjects. Each of hunger, satiety, thirst, fullness, emptiness and appetite was added as a covariate in 4 forms (separate models): 1) change; 2) post- and pre-mean; 3) pre-; 4) change and pre-. Satiety decreased (Δ = -13.39, p = 0.001) and thirst increased (Δ = 11.78, p = 0.006) during the scan. Changes in other constructs were not significant (p's > 0.05). Including covariates did not influence the food vs. object contrast of appeal ratings/fMRI-FCR. Significant image × covariate interactions were observed in some fMRI models; however, including these constructs did not improve the overall model fit. While some subjective, self-reported hunger, satiety and related constructs may moderate fMRI-FCR, these constructs do not appear to be salient influences on appeal/fMRI-FCR in people with obesity undergoing fMRI. Copyright © 2017 Elsevier Ltd. All rights reserved.
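    The four covariate forms described in this abstract (change score; post/pre mean; pre only; change plus pre) amount to alternative design-matrix columns. The snippet below is a simplified ordinary-least-squares stand-in for the study's mixed-effects models, run on simulated ratings; all variable names and values are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 32                                   # matches the sample size; data are simulated
pre = rng.normal(50.0, 10.0, n)          # pre-scan hunger rating (0-100 VAS)
post = pre + rng.normal(-5.0, 5.0, n)    # ratings drift downward during the scan
change = post - pre
# simulated appeal response with known effects of pre and change scores
appeal = 2.0 + 0.05 * pre + 0.10 * change + rng.normal(0.0, 0.5, n)

def fit_ols(covariates, y):
    # least-squares fit with an intercept; returns [b0, b1, ...]
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_change = fit_ols([change], appeal)              # form 1: change only
b_mean   = fit_ols([(pre + post) / 2.0], appeal)  # form 2: post/pre mean
b_pre    = fit_ols([pre], appeal)                 # form 3: pre only
b_both   = fit_ols([change, pre], appeal)         # form 4: change and pre
```

    Form 4 separates state at scan onset (pre) from within-scan drift (change), which is why the abstract treats it as a distinct model rather than a redundant combination of forms 1 and 3.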

  15. Are olfactory cues involved in nest recognition in two social species of estrildid finches?

    Directory of Open Access Journals (Sweden)

    E Tobias Krause

    Full Text Available Reliably recognizing their own nest provides parents with a necessary skill to invest time and resources efficiently in raising their offspring and thereby maximising their own reproductive success. Studies investigating nest recognition in adult birds have focused mainly on visual cues of the nest or the nest site and acoustic cues of the nestlings. To determine whether adult songbirds also use olfaction for nest recognition, we investigated the use of olfactory nest cues in two estrildid finch species, zebra finches (Taeniopygia guttata) and Bengalese finches (Lonchura striata var. domestica), during the nestling and fledgling phase of their offspring. We found similar behavioural responses to nest odours in both songbird species. Females preferred the odour of their own nest over a control and avoided the foreign conspecific nest scent over a control during the nestling phase of their offspring, but when given their own odour and the foreign conspecific odour simultaneously we did not find a preference for the own nest odour. Males of both species did not show any preferences at all. The behavioural reaction to any nest odour decreased after fledging of the offspring. Our results show that only females show a behavioural response to olfactory nest cues, indicating that the use of olfactory cues for nest recognition seems to be sex-specific and dependent on the developmental stage of the offspring. Although estrildid finches are known to use visual and acoustic cues for nest recognition, the similar behavioural pattern of both species indicates that at least females gain additional information from olfactory nest cues during the nestling phase of their offspring. Thus olfactory cues might be important in general, even in situations in which visual and acoustic cues are known to be sufficient.

  16. Evidence for a shared representation of sequential cues that engage sign-tracking.

    Science.gov (United States)

    Smedley, Elizabeth B; Smith, Kyle S

    2018-06-19

    Sign-tracking is a phenomenon whereby cues that predict rewards come to acquire their own motivational value (incentive salience) and attract appetitive behavior. Typically, sign-tracking paradigms have used single auditory, visual, or lever cues presented prior to reward delivery. Yet real-world events can often be predicted by a sequence of cues. We have shown that animals will sign-track to multiple cues presented in temporal sequence, and with time develop a bias in responding toward a reward-distal cue over a reward-proximal cue. Further, extinction of responding to the reward-proximal cue directly decreases responding to the reward-distal cue. One possible explanation of this result is that serial cues become representationally linked with one another. Here we provide further support for this by showing that extinction of responding to a reward-distal cue directly reduces responding to a reward-proximal cue. We suggest that the incentive salience of one cue can influence the incentive salience of the other. Copyright © 2018. Published by Elsevier B.V.

  17. Chemical and visual communication during mate searching in rock shrimp.

    Science.gov (United States)

    Díaz, Eliecer R; Thiel, Martin

    2004-06-01

    Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.

  18. The Effects of Spatial Endogenous Pre-cueing across Eccentricities

    OpenAIRE

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about the likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored outside an eccentricity of 20°. Given that the visual field has functional subdivisions, attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas...

  19. Cues used by the black fly, Simulium annulus, for attraction to the common loon (Gavia immer).

    Science.gov (United States)

    Weinandt, Meggin L; Meyer, Michael; Strand, Mac; Lindsay, Alec R

    2012-12-01

    The parasitic relationship between a black fly, Simulium annulus, and the common loon (Gavia immer) has been considered one of the most exclusive relationships between any host species and a black fly species. To test the host specificity of this blood-feeding insect, we made a series of bird decoy presentations to black flies on loon-inhabited lakes in northern Wisconsin, U.S.A. To examine the importance of chemical and visual cues for black fly detection of and attraction to hosts, we made decoy presentations with and without chemical cues. Flies attracted to the decoys were collected, identified to species, and quantified. Results showed that S. annulus had a strong preference for common loon visual and chemical cues, although visual cues from Canada geese (Branta canadensis) and mallards (Anas platyrhynchos) did attract some flies in significantly smaller numbers. © 2012 The Society for Vector Ecology.

  20. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral 'what' and dorsal 'where' pathways.

  1. Visual Saliency Models for Text Detection in Real World.

    Directory of Open Access Journals (Sweden)

    Renwu Gao

    Full Text Available This paper evaluates the degree of saliency of texts in natural scenes using visual saliency models. A large-scale scene image database with pixel-level ground truth is created for this purpose. Using this database and five state-of-the-art models, visual saliency maps that represent the degree of saliency of the objects are calculated. The receiver operating characteristic curve is employed to evaluate the saliency of scene texts as calculated by the visual saliency models. A visualization of the distribution of scene texts and non-texts in the space constructed by three kinds of saliency maps, calculated using Itti's visual saliency model with intensity, color, and orientation features, is given. This visualization indicates that text characters are more salient than their non-text neighbors, and can be captured from the background. Therefore, scene texts can be extracted from the scene images. With this in mind, a new visual saliency architecture, named the hierarchical visual saliency model, is proposed. The hierarchical visual saliency model is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate the saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region of interest. In the second stage, Itti's model is applied to the salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.
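    The two-stage idea can be sketched in a few lines of NumPy. This is a simplified stand-in, not the authors' implementation: it uses a single center-surround intensity contrast in place of Itti's full multi-channel model (intensity, color, orientation), together with Otsu's global threshold to crop the salient region before recomputing saliency within it.

```python
import numpy as np

def blur(img, sigma):
    # separable Gaussian blur via 1-D convolutions; the kernel is clipped so
    # it never exceeds the image, which keeps mode="same" shape-preserving
    radius = min(int(3 * sigma), (min(img.shape) - 1) // 2)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def saliency_map(img):
    # crude center-surround intensity contrast (fine scale minus coarse scale)
    return np.abs(blur(img, 2.0) - blur(img, 8.0))

def otsu_threshold(values, bins=64):
    # Otsu's global threshold: pick the bin boundary maximizing between-class variance
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def hierarchical_saliency(img):
    # stage 1: global saliency map; Otsu threshold selects the salient region
    s1 = saliency_map(img)
    mask = s1 > otsu_threshold(s1.ravel())
    ys, xs = np.where(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    # stage 2: saliency recomputed inside the cropped salient region only
    s2 = saliency_map(img[y0:y1, x0:x1])
    return (y0, y1, x0, x1), s2
```

    On a synthetic image with a bright patch on a dark background, stage 1 localizes the patch and stage 2 returns a saliency map confined to that crop; the restriction to the salient region is what lets the second pass sharpen contrast among candidate text pixels.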

  2. Feasibility of external rhythmic cueing with the Google Glass for improving gait in people with Parkinson's disease.

    Science.gov (United States)

    Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A

    2016-06-01

    New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8; mean disease duration = 13.6 years) were tested at end of dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no-cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than rhythmic visual cues and most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in its current form, auditory cues seemed more effective than rhythmic visual cues.

  3. Scene-Based Contextual Cueing in Pigeons

    Science.gov (United States)

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  4. Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions

    Science.gov (United States)

    Huang, Zhi; Fan, Baozheng; Song, Xiaolin

    2018-03-01

    As an essential component of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we proposed a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination, and stochastic lane shapes.
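    The tracking stage pairs a Kalman filter with adaptive noise covariance. A minimal sketch of that idea, assuming a constant-position model over the lane-fit coefficients and a simple residual-based inflation of the measurement noise; the class name, dimensions, and scaling rule are illustrative, not the paper's:

```python
import numpy as np

class LaneKalman:
    """Constant-position Kalman filter over lane-model coefficients."""
    def __init__(self, n, q=1e-3, r=1e-1):
        self.x = np.zeros(n)          # estimated lane-model coefficients
        self.P = np.eye(n)            # state covariance
        self.Q = q * np.eye(n)        # process noise
        self.R = r * np.eye(n)        # baseline measurement noise

    def step(self, z, residual_scale=1.0):
        # "adaptive noise covariance": inflate R when the line fit is poor
        R = self.R * residual_scale
        self.P = self.P + self.Q                  # predict (identity dynamics)
        K = self.P @ np.linalg.inv(self.P + R)    # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P
        return self.x
```

    Fed a stream of noisy per-frame fits of stable lane coefficients, the estimate converges toward the true values while smoothing out frame-to-frame jitter.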

  5. Speed on the dance floor: Auditory and visual cues for musical tempo.

    Science.gov (United States)

    London, Justin; Burger, Birgitta; Thompson, Marc; Toiviainen, Petri

    2016-02-01

    Musical tempo is most strongly associated with the rate of the beat or "tactus," which may be defined as the most prominent rhythmic periodicity present in the music, typically in a range of 1.67-2 Hz. However, other factors such as rhythmic density, mean rhythmic inter-onset interval, metrical (accentual) structure, and rhythmic complexity can affect perceived tempo (Drake, Gros, & Penel, 1999; London, 2011). Visual information can also give rise to a perceived beat/tempo (Iversen et al., 2015), and auditory and visual temporal cues can interact and mutually influence each other (Soto-Faraco & Kingstone, 2004; Spence, 2015). A five-part experiment was performed to assess the integration of auditory and visual information in judgments of musical tempo. Participants rated the speed of six classic R&B songs on a seven-point scale while observing an animated figure dancing to them. Participants were presented with original and time-stretched (±5%) versions of each song in audio-only, audio+video (A+V), and video-only conditions. In some videos the animations were of spontaneous movements to the different time-stretched versions of each song, and in other videos the animations were of "vigorous" versus "relaxed" interpretations of the same auditory stimulus. Two main results were observed. First, in all conditions with audio, even though participants were able to correctly rank the original vs. time-stretched versions of each song, a song-specific tempo-anchoring effect was observed, such that sped-up versions of slower songs were judged to be faster than slowed-down versions of faster songs, even when their objective beat rates were the same. Second, when viewing a vigorous dancing figure in the A+V condition, participants gave faster tempo ratings than from the audio alone or when viewing the same audio with a relaxed dancing figure. The implications of this illusory tempo percept for cross-modal sensory integration and

  6. Two (or three) is one too many: testing the flexibility of contextual cueing with multiple target locations.

    Science.gov (United States)

    Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J

    2011-10-01

    Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.

  7. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Directory of Open Access Journals (Sweden)

    Kirsten E Smayda

    Full Text Available Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger

  8. Characterizing the effects of feature salience and top-down attention in the early visual system.

    Science.gov (United States)

    Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank

    2017-07-01

    The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of

  9. Oxytocin administration suppresses hypothalamic activation in response to visual food cues.

    Science.gov (United States)

    van der Klaauw, Agatha A; Ziauddeen, Hisham; Keogh, Julia M; Henning, Elana; Dachi, Sekesai; Fletcher, Paul C; Farooqi, I Sadaf

    2017-06-27

    The aim of this study was to use functional neuroimaging to investigate whether oxytocin modulates the neural response to visual food cues in brain regions involved in the control of food intake. Twenty-four normal weight volunteers received intranasal oxytocin (24 IU) or placebo in a double-blind, randomized crossover study. Measurements were made forty-five minutes after dosing. On two occasions, functional MRI (fMRI) scans were performed in the fasted state; the blood oxygen level-dependent (BOLD) response to images of high-calorie foods versus low-calorie foods was measured. Given its critical role in eating behaviour, the primary region of interest was the hypothalamus. Secondary analyses examined the parabrachial nuclei and other brain regions involved in food intake and food reward. Intranasal oxytocin administration suppressed hypothalamic activation to images of high-calorie compared to low-calorie food (P = 0.0125). There was also a trend towards suppression of activation in the parabrachial nucleus (P = 0.0683). No effects of intranasal oxytocin were seen in reward circuits or on ad libitum food intake. Further characterization of the effects of oxytocin on neural circuits in the hypothalamus is needed to establish the utility of targeting oxytocin signalling in obesity.

  10. The Effects of Attention Cueing on Visualizers' Multimedia Learning

    Science.gov (United States)

    Yang, Hui-Yu

    2016-01-01

    The present study examines how various types of attention cueing and cognitive preference affect learners' comprehension of a cardiovascular system and cognitive load. EFL learners were randomly assigned to one of four conditions: non-signal, static-blood-signal, static-blood-static-arrow-signal, and animation-signal. The results indicated that…

  11. A magnetorheological haptic cue accelerator for manual transmission vehicles

    International Nuclear Information System (INIS)

    Han, Young-Min; Noh, Kyung-Wook; Choi, Seung-Bok; Lee, Yang-Sub

    2010-01-01

    This paper proposes a new haptic cue function for manual transmission vehicles to achieve optimal gear shifting. This function is implemented on the accelerator pedal by utilizing a magnetorheological (MR) brake mechanism. By combining the haptic cue function with the accelerator pedal, the proposed haptic cue device can transmit the optimal moment of gear shifting for manual transmission to a driver without requiring the driver's visual attention. As a first step to achieve this goal, an MR fluid-based haptic device is devised to enable rotary motion of the accelerator pedal. Taking into account spatial limitations, the design parameters are optimally determined using finite element analysis to maximize the relative control torque. The proposed haptic cue device is then manufactured and its field-dependent torque and time response are experimentally evaluated. Then the manufactured MR haptic cue device is integrated with the accelerator pedal. A simple virtual vehicle emulating the operation of the engine of a passenger vehicle is constructed and put into communication with the haptic cue device. A feed-forward torque control algorithm for the haptic cue is formulated and control performances are experimentally evaluated and presented in the time domain.

  12. POST-RETRIEVAL EXTINCTION ATTENUATES ALCOHOL CUE REACTIVITY IN RATS

    Science.gov (United States)

    Cofresí, Roberto U.; Lewis, Suzanne M.; Chaudhri, Nadia; Lee, Hongjoo J.; Monfils, Marie-H.; Gonzales, Rueben A.

    2017-01-01

    BACKGROUND Conditioned responses to alcohol-associated cues can hinder recovery from alcohol use disorder (AUD). Cue exposure (extinction) therapy (CET) can reduce reactivity to alcohol cues, but its efficacy is limited by phenomena such as spontaneous recovery and reinstatement that can cause a return of conditioned responding after extinction. Using a preclinical model of alcohol cue reactivity in rats, we evaluated whether the efficacy of alcohol CET could be improved by conducting CET during the memory reconsolidation window after retrieval of a cue-alcohol association. METHODS Rats were provided with intermittent access to unsweetened alcohol. Rats were then trained to predict alcohol access based on a visual cue. Next, rats were treated with either standard extinction (n=14) or post-retrieval extinction (n=13). Rats were then tested for long-term memory of extinction and susceptibility to spontaneous recovery and reinstatement. RESULTS Despite equivalent extinction, rats treated with post-retrieval extinction exhibited reduced spontaneous recovery and reinstatement relative to rats treated with standard extinction. CONCLUSIONS Post-retrieval CET shows promise for persistently attenuating the risk to relapse posed by alcohol cues in individuals with AUD. PMID:28169439

  13. On the Electrophysiological Evidence for the Capture of Visual Attention

    Science.gov (United States)

    McDonald, John J.; Green, Jessica J.; Jannati, Ali; Di Lollo, Vincent

    2013-01-01

    The presence of a salient distractor interferes with visual search. According to the salience-driven selection hypothesis, this interference is because of an initial deployment of attention to the distractor. Three event-related potential (ERP) findings have been regarded as evidence for this hypothesis: (a) salient distractors were found to…

  14. Collinearity Impairs Local Element Visual Search

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  15. Acoustic cues identifying phonetic transitions for speech segmentation

    CSIR Research Space (South Africa)

    Van Niekerk, DR

    2008-11-01

    Full Text Available The quality of corpus-based text-to-speech (TTS) systems depends strongly on the consistency of boundary placements during phonetic alignments. Expert human transcribers use visually represented acoustic cues in order to consistently place...

  16. Selective attention to smoking cues in former smokers.

    Science.gov (United States)

    Rehme, Anne K; Bey, Katharina; Frommann, Ingo; Mogg, Karin; Bradley, Brendan P; Bludau, Julia; Block, Verena; Sträter, Birgitta; Schütz, Christian G; Wagner, Michael

    2018-02-01

    Repeated drug use modifies the emotional and cognitive processing of drug-associated cues. These changes are supposed to persist even after prolonged abstinence. Several studies demonstrated that smoking cues selectively attract the attention of smokers, but empirical evidence for such an attentional bias among successful quitters is inconclusive. Here, we investigated whether attentional biases persist after smoking cessation. Thirty-eight former smokers, 34 current smokers, and 29 non-smokers participated in a single experimental session. We used three measures of attentional bias for smoking stimuli: A visual probe task with short (500ms) and long (2000ms) picture stimulus durations, and a modified Stroop task with smoking-related and neutral words. Former smokers and current smokers, as compared to non-smokers, showed an attentional bias in visual orienting to smoking pictures in the 500ms condition of the visual probe task. The Stroop interference index of smoking words was negatively related to nicotine dependence in current smokers. Former smokers and mildly dependent smokers, as compared to non-smokers, showed increased interference by smoking words in the Stroop task. Neither current nor former smokers showed an attentional bias in maintained attention (2000ms visual probe task). In conclusion, even after prolonged abstinence smoking cues retain incentive salience in former smokers, who differed from non-smokers on two attentional bias indices. Attentional biases in former smokers operate mainly in early involuntary rather than in controlled processing, and may represent a vulnerability factor for relapse. Therefore, smoking cessation programs should strengthen self-control abilities to prevent relapses. Copyright © 2017 Elsevier B.V. and ECNP. All rights reserved.
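    For reference, the attentional-bias index in a visual probe (dot-probe) task of this kind is conventionally scored as the mean reaction-time difference between probes replacing neutral pictures and probes replacing smoking pictures, with positive values indicating a bias toward smoking cues. A minimal sketch; the function name and sample values are illustrative, and the study's exact scoring formula is not given in this record:

```python
import numpy as np

def attentional_bias(rt_probe_at_neutral_ms, rt_probe_at_cue_ms):
    """Dot-probe bias score: positive = faster responses when the probe replaces the cue."""
    return float(np.mean(rt_probe_at_neutral_ms) - np.mean(rt_probe_at_cue_ms))
```

    For example, mean reaction times of 525 ms (probe at the neutral picture) versus 505 ms (probe at the smoking picture) yield a bias of +20 ms toward smoking cues.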

  17. The time course of protecting a visual memory representation from perceptual interference

    Directory of Open Access Journals (Sweden)

    Dirk van Moorselaar

    2015-01-01

    Full Text Available Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the SOA between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed.

  18. The time course of protecting a visual memory representation from perceptual interference

    Science.gov (United States)

    van Moorselaar, Dirk; Gunseli, Eren; Theeuwes, Jan; N. L. Olivers, Christian

    2015-01-01

    Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the stimulus onset asynchrony (SOA) between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed. PMID:25628555

  19. A treat for the eyes. An eye-tracking study on children's attention to unhealthy and healthy food cues in media content.

    Science.gov (United States)

    Spielvogel, Ines; Matthes, Jörg; Naderer, Brigitte; Karsay, Kathrin

    2018-06-01

    Based on cue reactivity theory, food cues embedded in media content can lead to physiological and psychological responses in children. Research suggests that unhealthy food cues are represented more extensively and interactively in children's media environments than healthy ones. However, it is not clear to date whether children react differently to unhealthy compared to healthy food cues. In an experimental study with 56 children (55.4% girls; M age = 8.00, SD = 1.58), we used eye-tracking to determine children's attention to unhealthy and healthy food cues embedded in a narrative cartoon movie. Besides varying the food type (i.e., healthy vs. unhealthy), we also manipulated the integration levels of food cues with characters (i.e., level of food integration; no interaction vs. handling vs. consumption), and we assessed children's individual susceptibility factors by measuring the impact of their hunger level. Our results indicated that unhealthy food cues attract children's visual attention to a larger extent than healthy cues. However, their initial visual interest did not differ between unhealthy and healthy food cues. Furthermore, an increase in the level of food integration led to an increase in visual attention. Our findings showed no moderating impact of hunger. We conclude that especially unhealthy food cues with an interactive connection trigger cue reactivity in children. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Shifting attention among working memory representations: testing cue type, awareness, and strategic control.

    Science.gov (United States)

    Berryhill, Marian E; Richmond, Lauren L; Shay, Cara S; Olson, Ingrid R

    2012-01-01

    It is well known that visual working memory (VWM) performance is modulated by attentional cues presented during encoding. Interestingly, retrospective cues presented after encoding, but prior to the test phase also improve performance. This improvement in performance is termed the retro-cue benefit. We investigated whether the retro-cue benefit is sensitive to cue type, whether participants were aware of their improvement in performance due to the retro-cue, and whether the effect was under strategic control. Experiment 1 compared the potential cueing benefits of abrupt onset retro-cues relying on bottom-up attention, number retro-cues relying on top-down attention, and arrow retro-cues, relying on a mixture of both. We found a significant retro-cue effect only for arrow retro-cues. In Experiment 2, we tested participants' awareness of their use of the informative retro-cue and found that they were aware of their improved performance. In Experiment 3, we asked whether participants have strategic control over the retro-cue. The retro-cue was difficult to ignore, suggesting that strategic control is low. The retro-cue effect appears to be within conscious awareness but not under full strategic control.

  1. Hunger modulates behavioral disinhibition and attention allocation to food-associated cues in normal-weight controls.

    Science.gov (United States)

    Loeber, Sabine; Grosshans, Martin; Herpertz, Stephan; Kiefer, Falk; Herpertz, Sabine C

    2013-12-01

    Overeating, weight gain and obesity are considered a major health problem in Western societies. At present, an impairment of response inhibition and a biased salience attribution to food-associated stimuli are considered important factors associated with weight gain. However, recent findings suggest that the association of impaired response inhibition and biased salience attribution with weight gain might be modulated by other factors. Thus, hunger might cause food-associated cues to be perceived as more salient and rewarding and might be associated with an impairment of response inhibition. However, at present, little is known about how hunger interacts with these processes. Thus, the aim of the present study was to investigate whether hunger modulates response inhibition and attention allocation towards food-associated stimuli in normal-weight controls. A go-/nogo task with food-associated and control words and a visual dot-probe task with food-associated and control pictures were administered to 48 normal-weight participants (mean age 24.5 years, range 19-40; mean BMI 21.6, range 18.5-25.4). Hunger was assessed twofold, using a self-reported measure of hunger and a measurement of the blood glucose level. Our results indicated that self-reported hunger affected behavioral response inhibition in the go-/nogo task. Thus, hungry participants committed significantly more commission errors when food-associated stimuli served as distractors compared to when control stimuli were the distractors. This effect was not observed in sated participants. In addition, we found that self-reported hunger was associated with a lower number of omission errors in response to food-associated stimuli, indicating a higher salience of these stimuli. Low blood glucose level was not associated with an impairment of response inhibition. 
However, our results indicated that the blood glucose level was associated with an attentional bias towards food-associated cues in the visual dot probe task

  2. Functional neuroimaging studies in addiction: multisensory drug stimuli and neural cue reactivity.

    Science.gov (United States)

    Yalachkov, Yavor; Kaiser, Jochen; Naumer, Marcus J

    2012-02-01

    Neuroimaging studies on cue reactivity have substantially contributed to the understanding of addiction. In the majority of studies drug cues were presented in the visual modality. However, exposure to conditioned cues in real life occurs often simultaneously in more than one sensory modality. Therefore, multisensory cues should elicit cue reactivity more consistently than unisensory stimuli and increase the ecological validity and the reliability of brain activation measurements. This review includes the data from 44 whole-brain functional neuroimaging studies with a total of 1168 subjects (812 patients and 356 controls). Correlations between neural cue reactivity and clinical covariates such as craving have been reported significantly more often for multisensory than unisensory cues in the motor cortex, insula and posterior cingulate cortex. Thus, multisensory drug cues are particularly effective in revealing brain-behavior relationships in neurocircuits of addiction responsible for motivation, craving awareness and self-related processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. E-gaze : create gaze communication for peoplewith visual disability

    NARCIS (Netherlands)

    Qiu, S.; Osawa, H.; Hu, J.; Rauterberg, G.W.M.

    2015-01-01

    Gaze signals are frequently used by sighted people in social interactions as visual cues. However, these signals and cues are hardly accessible to people with visual disability. A conceptual design of E-Gaze glasses, an assistive device that creates gaze communication between blind and sighted people, is proposed.

  4. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish

    OpenAIRE

    Darmaillacq, Anne-Sophie; Mezrai, Nawel; O'Brien, Caitlin E.; Dickel, Ludovic

    2017-01-01

    Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavio...

  5. Neural correlates of contextual cueing are modulated by explicit learning.

    Science.gov (United States)

    Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A

    2011-10-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  7. Magic and Misdirection: The Influence of Social Cues on the Allocation of Visual Attention While Watching a Cups-and-Balls Routine

    Directory of Open Access Journals (Sweden)

    Andreas eHergovich

    2016-05-01

    In recent years, a body of research that regards the scientific study of magic performances as a promising method of investigating psychological phenomena in an ecologically valid setting has emerged. Seemingly contradictory findings concerning the ability of social cues to strengthen a magic trick’s effectiveness have been published. In this experiment, an effort was made to disentangle the unique influence of different social and physical triggers of attentional misdirection on observers’ overt and covert attention. The ability of 120 participants to detect the mechanism of a cups-and-balls trick was assessed, and their visual fixations were recorded using an eye-tracker while they were watching the routine. All the investigated techniques of misdirection, including sole usage of social cues, were shown to increase the probability of missing the trick mechanism. Depending on the technique of misdirection used, very different gaze patterns were observed. A combination of social and physical techniques of misdirection influenced participants’ overt attention most effectively.

  8. Toward a New Theory for Selecting Instructional Visuals.

    Science.gov (United States)

    Croft, Richard S.; Burton, John K.

    This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…

  9. Visual Discomfort and Depth-of-Field

    Directory of Open Access Journals (Sweden)

    Louise O'Hare

    2013-05-01

    Visual discomfort has been reported for certain visual stimuli and under particular viewing conditions, such as stereoscopic viewing. In stereoscopic viewing, visual discomfort can be caused by a conflict between accommodation and convergence cues that may specify different distances in depth. Earlier research has shown that depth-of-field, which is the distance range in depth in the scene that is perceived to be sharp, influences both the perception of egocentric distance to the focal plane, and the distance range in depth between objects in the scene. Because depth-of-field may also be in conflict with convergence and the accommodative state of the eyes, we raised the question of whether depth-of-field affects discomfort when viewing stereoscopic photographs. The first experiment assessed whether discomfort increases when depth-of-field is in conflict with coherent accommodation–convergence cues to distance in depth. The second experiment assessed whether depth-of-field influences discomfort from a pre-existing accommodation–convergence conflict. Results showed no effect of depth-of-field on visual discomfort. These results suggest therefore that depth-of-field can be used as a cue to depth without inducing discomfort in the viewer, even when cue conflicts are large.

  10. Estimating location without external cues.

    Directory of Open Access Journals (Sweden)

    Allen Cheung

    2014-10-01

    The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provides enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.
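
    The localization problem described above can be illustrated with a toy dead-reckoning simulation: with only error-prone self-motion estimates, heading noise accumulates and the position estimate drifts, which is why fusing idiothetic cues with an internal map of the boundary is needed. A minimal sketch (all noise parameters here are invented, not the paper's rodent error model):

```python
import math
import random

def dead_reckon(n_steps, step=1.0, heading_noise=0.05, seed=0):
    """Integrate noisy idiothetic (self-motion) estimates only and
    return the final error between believed and true position."""
    rng = random.Random(seed)
    bx = by = btheta = 0.0   # believed pose
    tx = ty = ttheta = 0.0   # true pose
    for _ in range(n_steps):
        turn = rng.uniform(-0.3, 0.3)                  # true turn executed
        ttheta += turn
        tx += step * math.cos(ttheta)
        ty += step * math.sin(ttheta)
        sensed = turn + rng.gauss(0.0, heading_noise)  # noisy sense of the turn
        btheta += sensed
        bx += step * math.cos(btheta)
        by += step * math.sin(btheta)
    return math.hypot(bx - tx, by - ty)

# drift grows with path length when no external cue corrects the estimate
avg_err_50 = sum(dead_reckon(50, seed=s) for s in range(20)) / 20
avg_err_500 = sum(dead_reckon(500, seed=s) for s in range(20)) / 20
```

    Averaged over runs, the drift after 500 steps greatly exceeds the drift after 50, so some map-based correction, such as the memorized arena boundary, is required to keep localization stable.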

  11. A configural dominant account of contextual cueing: Configural cues are stronger than colour cues.

    Science.gov (United States)

    Kunar, Melina A; John, Rebecca; Sweetman, Hollie

    2014-01-01

    Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed "contextual cueing" (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they appeared together. In Experiment 1, participants searched for a target that was cued by both colour and distractor configural cues, compared with when the target was only predicted by configural information. The results showed that the addition of a colour cue did not increase contextual cueing. In Experiment 2, participants searched for a target that was cued by both colour and distractor configuration compared with when the target was only cued by colour. The results showed that adding a predictive configural cue led to a stronger CC benefit. Experiments 3 and 4 tested the disruptive effects of removing either a learned colour cue or a learned configural cue and whether there was cue competition when colour and configural cues were presented together. Removing the configural cue was more disruptive to CC than removing colour, and configural learning was shown to overshadow the learning of colour cues. The data support a configural dominant account of CC, where configural cues act as the stronger cue in comparison to colour when they are presented together.

  12. Assessment of rival males through the use of multiple sensory cues in the fruitfly Drosophila pseudoobscura.

    Directory of Open Access Journals (Sweden)

    Chris P Maguire

    Environments vary stochastically, and animals need to behave in ways that best fit the conditions in which they find themselves. The social environment is particularly variable, and responding appropriately to it can be vital for an animal's success. However, cues of social environment are not always reliable, and animals may need to balance accuracy against the risk of failing to respond if local conditions or interfering signals prevent them from detecting a cue. Recent work has shown that many male Drosophila fruit flies respond to the presence of rival males, and that these responses increase their success in acquiring mates and fathering offspring. In Drosophila melanogaster, males detect rivals using auditory, tactile and olfactory cues. However, males fail to respond to rivals if any two of these senses are not functioning: a single cue is not enough to produce a response. Here we examined cue use in the detection of rival males in a distantly related Drosophila species, D. pseudoobscura, where auditory, olfactory, tactile and visual cues were manipulated to assess the importance of each sensory cue singly and in combination. In contrast to D. melanogaster, male D. pseudoobscura require intact olfactory and tactile cues to respond to rivals. Visual cues were not important for detecting rival D. pseudoobscura, while results on auditory cues appeared puzzling. This difference in cue use in two species in the same genus suggests that cue use is evolutionarily labile, and may evolve in response to ecological or life history differences between species.

  13. Current superimposition variable flux reluctance motor with 8 salient poles

    Science.gov (United States)

    Takahara, Kazuaki; Hirata, Katsuhiro; Niguchi, Noboru; Kohara, Akira

    2017-12-01

    We propose a current superimposition variable flux reluctance motor for a traction motor of electric vehicles and hybrid electric vehicles, which consists of 10 salient poles in the rotor and 12 slots in the stator. However, the iron losses of this motor are large in high rotation speed ranges because the number of salient poles is large. In this paper, we propose a current superimposition variable flux reluctance motor that consists of 8 salient poles and 12 slots. The characteristics of the 10-pole-12-slot and 8-pole-12-slot current superimposition variable flux reluctance motors are compared using finite element analysis under vector control.

  14. Current superimposition variable flux reluctance motor with 8 salient poles

    Directory of Open Access Journals (Sweden)

    Takahara Kazuaki

    2017-12-01

    We propose a current superimposition variable flux reluctance motor for a traction motor of electric vehicles and hybrid electric vehicles, which consists of 10 salient poles in the rotor and 12 slots in the stator. However, the iron losses of this motor are large in high rotation speed ranges because the number of salient poles is large. In this paper, we propose a current superimposition variable flux reluctance motor that consists of 8 salient poles and 12 slots. The characteristics of the 10-pole-12-slot and 8-pole-12-slot current superimposition variable flux reluctance motors are compared using finite element analysis under vector control.

  15. Working memory can enhance unconscious visual perception.

    Science.gov (United States)

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  16. Subliminal Cues While Teaching: HCI Technique for Enhanced Learning

    Directory of Open Access Journals (Sweden)

    Pierre Chalfoun

    2011-01-01

    This paper presents results from an empirical study conducted with a subliminal teaching technique aimed at enhancing learners' performance in Intelligent Systems through the use of physiological sensors. This technique uses carefully designed subliminal cues (positive) and miscues (negative) and projects them under the learner's perceptual visual threshold. A positive cue, called an answer cue, is a hint aimed at enhancing the learner's inductive reasoning abilities and is projected in a way that helps them figure out the solution faster and, more importantly, better. A negative cue, called a miscue, aims at the opposite: distracting the learner or leading them to the wrong conclusion. The results obtained showed that only subliminal cues, not miscues, could significantly increase learner performance and intuition in a logic-based problem-solving task. Nonintrusive physiological sensors (EEG for recording brainwaves, blood volume pressure to compute heart rate, and skin response to record skin conductivity) were used to record affective and cerebral responses throughout the experiment. The descriptive analysis, combined with the physiological data, provides compelling evidence for the positive impact of answer cues on reasoning and intuitive decision making in a logic-based problem-solving paradigm.

  17. Food and drug cues activate similar brain regions: a meta-analysis of functional MRI studies.

    Science.gov (United States)

    Tang, D W; Fellows, L K; Small, D M; Dagher, A

    2012-06-06

    In healthy individuals, food cues can trigger hunger and feeding behavior. Likewise, smoking cues can trigger craving and relapse in smokers. Brain imaging studies report that structures involved in appetitive behaviors and reward, notably the insula, striatum, amygdala and orbital frontal cortex, tend to be activated by both visual food and smoking cues. Here, by carrying out a meta-analysis of human neuro-imaging studies, we investigate the neural network activated by: 1) food versus neutral cues (14 studies, 142 foci) 2) smoking versus neutral cues (15 studies, 176 foci) 3) smoking versus neutral cues when correlated with craving scores (7 studies, 108 foci). PubMed was used to identify cue-reactivity imaging studies that compared brain response to visual food or smoking cues to neutral cues. Fourteen articles were identified for the food meta-analysis and fifteen articles were identified for the smoking meta-analysis. Six articles were identified for the smoking cue correlated with craving analysis. Meta-analyses were carried out using activation likelihood estimation. Food cues were associated with increased blood oxygen level dependent (BOLD) response in the left amygdala, bilateral insula, bilateral orbital frontal cortex, and striatum. Smoking cues were associated with increased BOLD signal in the same areas, with the exception of the insula. However, the smoking meta-analysis of brain maps correlating cue-reactivity with subjective craving did identify the insula, suggesting that insula activation is only found when craving levels are high. The brain areas identified here are involved in learning, memory and motivation, and their cue-induced activity is an index of the incentive salience of the cues. Using meta-analytic techniques to combine a series of studies, we found that food and smoking cues activate comparable brain networks. There is significant overlap in brain regions responding to conditioned cues associated with natural and drug rewards
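
    Activation likelihood estimation (ALE), the method used for the meta-analyses above, models each reported focus as a Gaussian probability blob and combines the blobs across studies as a probabilistic union. A one-dimensional sketch of the idea (the grid, kernel width, and foci are invented for illustration; real ALE works on a 3-D brain grid with empirically derived kernels and permutation-based thresholding):

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian kernel, peaking at 1.0 over the focus."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def ale_map(foci, grid, sigma=2.0):
    """Combine per-focus modeled-activation values as a probabilistic
    union: ALE(v) = 1 - prod_i (1 - MA_i(v))."""
    scores = []
    for v in grid:
        p_none = 1.0                      # probability no focus activates v
        for f in foci:
            p_none *= 1.0 - gaussian(v, f, sigma)
        scores.append(1.0 - p_none)
    return scores

grid = list(range(40))
# two overlapping foci (10, 12) and one isolated focus (30)
scores = ale_map(foci=[10, 12, 30], grid=grid)
```

    Nearby foci reinforce each other, so coordinates reported consistently across studies receive the highest ALE values, which is how convergent regions such as the insula and striatum emerge from many individually noisy studies.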

  18. Amodal brain activation and functional connectivity in response to high-energy-density food cues in obesity.

    Science.gov (United States)

    Carnell, Susan; Benson, Leora; Pantazatos, Spiro P; Hirsch, Joy; Geliebter, Allan

    2014-11-01

    The obesogenic environment is pervasive, yet only some people become obese. The aim was to investigate whether obese individuals show differential neural responses to visual and auditory food cues, independent of cue modality. Obese (BMI 29-41, n = 10) and lean (BMI 20-24, n = 10) females underwent fMRI scanning during presentation of auditory (spoken word) and visual (photograph) cues representing high-energy-density (ED) and low-ED foods. The effect of obesity on whole-brain activation, and on functional connectivity with the midbrain/VTA, was examined. Obese compared with lean women showed greater modality-independent activation of the midbrain/VTA and putamen in response to high-ED (vs. low-ED) cues, as well as relatively greater functional connectivity between the midbrain/VTA and cerebellum (P < 0.05). Greater responses to high-ED food cues within the midbrain/VTA and putamen, and altered functional connectivity between the midbrain/VTA and cerebellum, could contribute to excessive food intake in obese individuals. © 2014 The Obesity Society.

  19. Default mode network deactivation to smoking cue relative to food cue predicts treatment outcome in nicotine use disorder.

    Science.gov (United States)

    Wilcox, Claire E; Claus, Eric D; Calhoun, Vince D; Rachakonda, Srinivas; Littlewood, Rae A; Mickey, Jessica; Arenella, Pamela B; Goodreau, Natalie; Hutchison, Kent E

    2018-01-01

    Identifying predictors of treatment outcome for nicotine use disorders (NUDs) may help improve efficacy of established treatments, like varenicline. Brain reactivity to drug stimuli predicts relapse risk in nicotine and other substance use disorders in some studies. Activity in the default mode network (DMN) is affected by drug cues and other palatable cues, but its clinical significance is unclear. In this study, 143 individuals with NUD (male n = 91, ages 18-55 years) received a functional magnetic resonance imaging scan during a visual cue task during which they were presented with a series of smoking-related or food-related video clips prior to randomization to treatment with varenicline (n = 80) or placebo. Group independent components analysis was utilized to isolate the DMN, and temporal sorting was used to calculate the difference between the DMN blood-oxygen-level dependent signal during smoke cues and that during food cues for each individual. Food cues were associated with greater deactivation compared with smoke cues in the DMN. In correcting for baseline smoking and other clinical variables, which have been shown to be related to treatment outcome in previous work, a less positive Smoke - Food difference score predicted greater smoking at 6 and 12 weeks when both treatment groups were combined (P = 0.005, β = -0.766). An exploratory analysis of executive control and salience networks demonstrated that a more positive Smoke - Food difference score for executive control network predicted a more robust response to varenicline relative to placebo. These findings provide further support to theories that brain reactivity to palatable cues, and in particular in DMN, may have a direct clinical relevance in NUD. © 2017 Society for the Study of Addiction.

  20. Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality

    Science.gov (United States)

    Hua, Hong

    2017-05-01

    Developing head-mounted displays (HMD) that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. Among the many challenges, minimizing visual discomfort is one of the key obstacles. One of the key contributing factors to visual discomfort is the lack of the ability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches to enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).

  1. Perceptual evaluation of visual alerts in surveillance videos

    Science.gov (United States)

    Rogowitz, Bernice E.; Topkara, Mercan; Pfeiffer, William; Hampapur, Arun

    2015-03-01

    Visual alerts are commonly used in video monitoring and surveillance systems to mark events, presumably making them more salient to human observers. Surprisingly, the effectiveness of computer-generated alerts in improving human performance has not been widely studied. To address this gap, we have developed a tool for simulating different alert parameters in a realistic visual monitoring situation, and have measured human detection performance under conditions that emulated different set-points in a surveillance algorithm. In the High-Sensitivity condition, the simulated alerts identified 100% of the events with many false alarms. In the Lower-Sensitivity condition, the simulated alerts correctly identified 70% of the targets, with fewer false alarms. In the control condition, no simulated alerts were provided. To explore the effects of learning, subjects performed these tasks in three sessions, on separate days, in a counterbalanced, within subject design. We explore these results within the context of cognitive models of human attention and learning. We found that human observers were more likely to respond to events when marked by a visual alert. Learning played a major role in the two alert conditions. In the first session, observers generated almost twice as many False Alarms as in the No-Alert condition, as the observers responded pre-attentively to the computer-generated false alarms. However, this rate dropped equally dramatically in later sessions, as observers learned to discount the false cues. Highest observer Precision, Hits/(Hits + False Alarms), was achieved in the High Sensitivity condition, but only after training. The successful evaluation of surveillance systems depends on understanding human attention and performance.

  2. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    Science.gov (United States)

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  3. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Xi Gong

    2018-03-01

    Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes under low-quality variance (e.g., various scales and noise). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and explores the high-level features for RS scene classification under different scale and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which conducts a patch-based visual saliency (PBVS) algorithm using “visual attention” mechanisms to guide pre-trained CNNs in producing the discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of the RS scene by directly propagating the label information to the fully-connected layers. A joint loss is used to minimize the anti-noise network by integrating an anti-noise constraint and a softmax classification loss. The proposed network architecture can be easily trained with a limited amount of training data. The experiments conducted on three RS scene datasets of different scales show that the DSFATN method achieves excellent performance and great robustness under different scale and noise conditions. It obtains classification accuracies of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state-of-the-art substantially.
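
    The saliency-guided patch extraction step can be sketched in miniature: rank image patches by a saliency score and keep only the top ones for the downstream CNN. The variance-based score and toy image below are illustrative stand-ins for the PBVS algorithm, not the authors' implementation:

```python
def patch_saliency(img, patch=4):
    """Score non-overlapping patches by local intensity variance
    (a crude proxy for a visual-saliency map)."""
    scores = []
    for r in range(0, len(img) - patch + 1, patch):
        for c in range(0, len(img[0]) - patch + 1, patch):
            vals = [img[r + i][c + j] for i in range(patch) for j in range(patch)]
            mean = sum(vals) / len(vals)
            scores.append(((r, c), sum((v - mean) ** 2 for v in vals) / len(vals)))
    return scores

def top_k_patches(img, k=1, patch=4):
    """Keep the k most salient patch positions; in the full pipeline these
    patches would be cropped and fed to a pre-trained CNN."""
    ranked = sorted(patch_saliency(img, patch), key=lambda s: s[1], reverse=True)
    return [pos for pos, _ in ranked[:k]]

# 8x8 toy image: flat background with one high-contrast (checkerboard) corner
img = [[0] * 8 for _ in range(8)]
for i in range(4):
    for j in range(4):
        img[i][j] = (i + j) % 2 * 255
```

    Here top_k_patches(img) picks the textured corner at (0, 0) over the three flat patches, which is exactly the ranking behavior the saliency guidance relies on.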

  4. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
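
    The washout behavior described for the vertical mode, passing acceleration onsets while attenuating large or sustained input, is classically obtained by high-pass filtering the simulated accelerations before they are sent to the platform. A first-order discrete sketch (the time constant and sampling interval are invented; the optimal and nonlinear algorithms in the paper are far more elaborate):

```python
def washout(accel, dt=0.01, tau=1.0):
    """First-order high-pass ('washout') filter: the platform reproduces
    acceleration onsets but drifts back to neutral under sustained input."""
    alpha = tau / (tau + dt)
    out = [0.0]
    for k in range(1, len(accel)):
        out.append(alpha * (out[-1] + accel[k] - accel[k - 1]))
    return out

# step input: a sustained 1 m/s^2 acceleration starting at sample 10
u = [0.0] * 10 + [1.0] * 500
cue = washout(u)
```

    The onset passes through almost unchanged, while the sustained portion decays toward zero, freeing the motion platform's limited travel; managing this trade-off between cue fidelity and actuator limits is the core problem any cueing algorithm addresses.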

  5. Visually guided adjustments of body posture in the roll plane

    OpenAIRE

    Tarnutzer, A A; Bockisch, C J; Straumann, D

    2013-01-01

    Body position relative to gravity is continuously updated to prevent falls. Therefore, the brain integrates input from the otoliths, truncal graviceptors, proprioception and vision. Without visual cues, the estimated direction of gravity depends mainly on otolith input and becomes more variable with increasing roll-tilt. In contrast, the discrimination threshold for object orientation shows little modulation with varying roll orientation of the visual stimulus. Providing earth-stationary visual cues,...

  6. Action Planning Mediates Guidance of Visual Attention from Working Memory.

    Science.gov (United States)

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2015-01-01

    Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interferences.

  7. Negative emotional stimuli reduce contextual cueing but not response times in inefficient search.

    Science.gov (United States)

    Kunar, Melina A; Watson, Derrick G; Cole, Louise; Cox, Angeline

    2014-02-01

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.

  8. Influence of cueing on the preparation and execution of untrained and trained complex motor responses

    Directory of Open Access Journals (Sweden)

    S.R. Alouche

    2012-05-01

    This study investigated the influence of cueing on the performance of untrained and trained complex motor responses. Healthy adults responded to a visual target by performing four sequential movements (complex response) or a single movement (simple response) of their middle finger. A visual cue preceded the target by an interval of 300, 1000, or 2000 ms. In Experiment 1, the complex and simple responses were not previously trained. During the testing session, the complex response pattern varied on a trial-by-trial basis following the indication provided by the visual cue. In Experiment 2, the complex response and the simple response were extensively trained beforehand. During the testing session, the trained complex response pattern was performed in all trials. The latency of the untrained and trained complex responses decreased from the short to the medium and long cue-target intervals. The latency of the complex response was longer than that of the simple response, except in the case of the trained responses and the long cue-target interval. These results suggest that the preparation of untrained complex responses cannot be completed in advance, this being possible, however, for trained complex responses when enough time is available. The duration of the 1st submovement, 1st pause and 2nd submovement of the untrained and the trained complex responses increased from the short to the long cue-target interval, suggesting that there is an increase of online programming of the response possibly related to the degree of certainty about the moment of target appearance.

  9. Visual discomfort and depth-of-field

    NARCIS (Netherlands)

    O'Hare, L.; Zhang, T.; Nefs, H.T.; Hibbard, P.B.

    2013-01-01

    Visual discomfort has been reported for certain visual stimuli and under particular viewing conditions, such as stereoscopic viewing. In stereoscopic viewing, visual discomfort can be caused by a conflict between accommodation and convergence cues that may specify different distances in depth.

  10. Motivation and short-term memory in visual search: Attention's accelerator revisited.

    Science.gov (United States)

    Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton

    2018-05-01

    A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Individual differences in using geometric and featural cues to maintain spatial orientation: cue quantity and cue ambiguity are more important than cue type.

    Science.gov (United States)

    Kelly, Jonathan W; McNamara, Timothy P; Bodenheimer, Bobby; Carr, Thomas H; Rieser, John J

    2009-02-01

    Two experiments explored the role of environmental cues in maintaining spatial orientation (sense of self-location and direction) during locomotion. Of particular interest was the importance of geometric cues (provided by environmental surfaces) and featural cues (nongeometric properties provided by striped walls) in maintaining spatial orientation. Participants performed a spatial updating task within virtual environments containing geometric or featural cues that were ambiguous or unambiguous indicators of self-location and direction. Cue type (geometric or featural) did not affect performance, but the number and ambiguity of environmental cues did. Gender differences, interpreted as a proxy for individual differences in spatial ability and/or experience, highlight the interaction between cue quantity and ambiguity. When environmental cues were ambiguous, men stayed oriented with either one or two cues, whereas women stayed oriented only with two. When environmental cues were unambiguous, women stayed oriented with one cue.

  12. Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo.

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2018-01-01

    Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued their partner in nonverbally. Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.

  13. Overall gloss evaluation in the presence of multiple cues to surface glossiness.

    Science.gov (United States)

    Leloup, Frédéric B; Pointer, Michael R; Dutré, Philip; Hanselaer, Peter

    2012-06-01

    Human observers use the information offered by various visual cues when evaluating the glossiness of a surface. Several studies have demonstrated the effect of each single cue to glossiness, but little has been reported on how multiple cues are integrated for the perception of surface gloss. This paper reports on a psychophysical study with real stimuli that differ with respect to multiple visual gloss criteria. Four samples were presented to 15 observers under different conditions of illumination in a light booth, resulting in a series of 16 stimuli. Through pairwise comparisons, an overall gloss scale was derived, from which it could be concluded that both differences in the distinctness of the reflected image and differences in luminance affect gloss perception. However, an investigation of the observers' strategy to evaluate gloss indicated a dichotomy among observers. One group of observers used the distinctness-of-image as a principal cue to glossiness, while the second group evaluated gloss primarily from differences in luminance of both the specular highlight and the diffuse background. It could therefore be questioned whether surface gloss can be characterized by one single quantity or whether a set of quantities is necessary to describe the gloss differences between objects.

  14. Bayesian integration of position and orientation cues in perception of biological and non-biological dynamic forms

    Directory of Open Access Journals (Sweden)

    Steven Matthew Thurman

    2014-02-01

    Full Text Available Visual form analysis is fundamental to shape perception and likely plays a central role in the perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches, in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information, whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased, and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic
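    The reliability-weighted integration described in this abstract is the standard Bayesian cue-combination rule; as a sketch (the symbols are ours, not the authors': \(\hat{s}_p\) and \(\hat{s}_o\) are the position- and orientation-based shape estimates, and \(\sigma_p^2\), \(\sigma_o^2\) their variances, i.e., inverse reliabilities):

```latex
\hat{s} = w_p\,\hat{s}_p + w_o\,\hat{s}_o,
\qquad
w_p = \frac{1/\sigma_p^2}{1/\sigma_p^2 + 1/\sigma_o^2},
\qquad
w_o = 1 - w_p .
```

    As orientation uncertainty \(\sigma_o^2\) grows, \(w_p \to 1\), reproducing the reported trade-off in which position cues increasingly dominate perception.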

  15. Effect of menstrual cycle phase on corticolimbic brain activation by visual food cues.

    Science.gov (United States)

    Frank, Tamar C; Kim, Ginah L; Krzemien, Alicja; Van Vugt, Dean A

    2010-12-02

    Food intake is decreased during the late follicular phase and increased in the luteal phase of the menstrual cycle. While a changing ovarian steroid milieu is believed to be responsible for this behavior, the specific mechanisms involved are poorly understood. Brain activity in response to visual food stimuli was compared during the estrogen dominant peri-ovulatory phase and the progesterone dominant luteal phase of the menstrual cycle. Twelve women underwent functional magnetic resonance imaging during the peri-ovulatory and luteal phases of the menstrual cycle in a counterbalanced fashion. Whole brain T2* images were collected while subjects viewed pictures of high calorie (HC) foods, low calorie (LC) foods, and control (C) pictures presented in a block design. Blood oxygen level dependent (BOLD) signal in the late follicular phase and luteal phase was determined for the contrasts HC-C, LC-C, HC-LC, and LC-HC. Both HC and LC stimuli activated numerous corticolimbic brain regions in the follicular phase, whereas only HC stimuli were effective in the luteal phase. Activation of the nucleus accumbens (NAc), amygdala, and hippocampus in response to the HC-C contrast and the hippocampus in response to the LC-C contrast was significantly increased in the late follicular phase compared to the luteal phase. Activation of the orbitofrontal cortex and mid cingulum in response to the HC-LC contrast was greater during the luteal phase. These results demonstrate for the first time that brain responses to visual food cues are influenced by menstrual cycle phase. We postulate that ovarian steroid modulation of the corticolimbic brain contributes to changes in ingestive behavior during the menstrual cycle. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Altered Brain Reactivity to Game Cues After Gaming Experience.

    Science.gov (United States)

    Ahn, Hyeon Min; Chung, Hwan Jun; Kim, Sang Hee

    2015-08-01

    Individuals who play Internet games excessively show elevated brain reactivity to game-related cues. This study attempted to test whether this elevated cue reactivity observed in game players is a result of repeated exposure to Internet games. Healthy young adults without a history of excessively playing Internet games were recruited, and they were instructed to play an online Internet game for 2 hours/day for five consecutive weekdays. Two control groups were used: the drama group, which viewed a fantasy TV drama, and the no-exposure group, which received no systematic exposure. All participants performed a cue reactivity task with game, drama, and neutral cues in the brain scanner, both before and after the exposure sessions. The game group showed an increased reactivity to game cues in the right ventrolateral prefrontal cortex (VLPFC). The degree of VLPFC activation increase was positively correlated with the self-reported increase in desire for the game. The drama group showed an increased cue reactivity in response to the presentation of drama cues in the caudate, posterior cingulate, and precuneus. The results indicate that exposure to either Internet games or TV dramas elevates the reactivity to visual cues associated with the particular exposure. The exact elevation patterns, however, appear to differ depending on the type of media experienced. How changes in each of the regions contribute to the progression to pathological craving warrants a future longitudinal study.

  17. Visualizing Summary Statistics and Uncertainty

    KAUST Repository

    Potter, K.; Kniss, J.; Riesenfeld, R.; Johnson, C.R.

    2010-01-01

    The graphical depiction of uncertainty information is emerging as a problem of great importance. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence. The visual portrayal of this information is a challenging task. This work takes inspiration from graphical data analysis to create visual representations that show not only the data value, but also important characteristics of the data including uncertainty. The canonical box plot is reexamined and a new hybrid summary plot is presented that incorporates a collection of descriptive statistics to highlight salient features of the data. Additionally, we present an extension of the summary plot to two dimensional distributions. Finally, a use-case of these new plots is presented, demonstrating their ability to present high-level overviews as well as detailed insight into the salient features of the underlying data distribution. © 2010 The Eurographics Association and Blackwell Publishing Ltd.
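    The "collection of descriptive statistics" that such a hybrid summary plot overlays on a box plot can be gathered with a few lines of standard-library code. This is a minimal sketch of plausible plot inputs (the function name and the particular statistics chosen are ours), not the authors' implementation:

```python
import statistics

def summary_stats(data):
    """Collect descriptive statistics a hybrid summary plot might
    overlay on a box plot: quartiles, mean, standard deviation, and
    a simple moment-based skewness estimate."""
    n = len(data)
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    q1, q2, q3 = statistics.quantiles(data, n=4)  # three quartile cut points
    # Sample skewness (biased form): mean cubed deviation over sd^3.
    skew = sum((x - mean) ** 3 for x in data) / (n * sd ** 3)
    return {"min": min(data), "q1": q1, "median": q2, "q3": q3,
            "max": max(data), "mean": mean, "sd": sd, "skew": skew}

stats = summary_stats([2, 4, 4, 4, 5, 5, 7, 9])
```

    A plotting layer would then draw the box from `q1` to `q3` and annotate the mean, spread, and skew alongside it.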

  19. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    Science.gov (United States)

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  20. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    Science.gov (United States)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input
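    The time-varying washout described for the nonlinear algorithm (small cues sustained longer, large cues washed out more quickly) can be illustrated with a first-order high-pass washout filter whose time constant depends on cue magnitude. This is a toy sketch of the general washout idea only, not the thesis's optimal-control formulation; all names, thresholds, and time constants are invented for illustration:

```python
def washout(u, dt, tau_small=5.0, tau_large=1.0, threshold=0.5):
    """First-order high-pass washout filter with a magnitude-dependent
    time constant: small cues persist longer (tau_small), large cues
    wash out faster (tau_large). Discretized by backward Euler from
    y' = u' - y/tau, giving y[k] = a*(y[k-1] + u[k] - u[k-1]) with
    a = tau/(tau + dt)."""
    y, y_prev, u_prev = [], 0.0, 0.0
    for u_k in u:
        tau = tau_large if abs(u_k) > threshold else tau_small
        a = tau / (tau + dt)
        y_k = a * (y_prev + u_k - u_prev)
        y.append(y_k)
        y_prev, u_prev = y_k, u_k
    return y

# A sustained 1 g step: the simulator cannot hold the acceleration,
# so the cue is reproduced at onset and then washed out toward zero.
out = washout([1.0] * 100, dt=0.1)
```

    A sub-threshold cue (e.g., `[0.3] * 100`) uses the longer time constant and therefore retains more of its value at the end of the same interval, mimicking the "small cues sustained longer" behavior.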

  1. Attentional Bias for Uncertain Cues of Shock in Human Fear Conditioning: Evidence for Attentional Learning Theory

    Science.gov (United States)

    Koenig, Stephan; Uengoer, Metin; Lachnit, Harald

    2017-01-01

    We conducted a human fear conditioning experiment in which three different color cues were followed by an aversive electric shock on 0, 50, and 100% of the trials, and thus induced low (L), partial (P), and high (H) shock expectancy, respectively. The cues differed with respect to the strength of their shock association (L < P < H). During conditioning we measured pupil dilation and ocular fixations to index differences in the attentional processing of the cues. After conditioning, the shock-associated colors were introduced as irrelevant distracters during visual search for a shape target while shocks were no longer administered, and we analyzed the cues' potential to capture and hold overt attention automatically. Our findings suggest that fear conditioning creates an automatic attention bias for the conditioned cues that depends on their correlation with the aversive outcome. This bias was exclusively linked to the strength of the cues' shock association for the early attentional processing of cues in the visual periphery, but was additionally influenced by the uncertainty of the shock prediction after participants fixated on the cues. These findings are in accord with attentional learning theories that formalize how associative learning shapes automatic attention. PMID:28588466

  2. Local spectral anisotropy is a valid cue for figure–ground organization in natural scenes

    OpenAIRE

    Ramenahalli, Sudarshan; Mihalas, Stefan; Niebur, Ernst

    2014-01-01

    An important step in understanding visual scenes is their organization into distinct perceptual objects, which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (farther from the observer) is made through a combination of global cues, like convexity, and local cues, like T-junctions. We here focus on a novel set of local cues in the intensity patterns along occlusion boundaries which...

  3. Reward processing in the value-driven attention network: reward signals tracking cue identity and location.

    Science.gov (United States)

    Anderson, Brian A

    2017-03-01

    Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Depression, not PTSD, is associated with attentional biases for emotional visual cues in early traumatized individuals with PTSD

    Directory of Open Access Journals (Sweden)

    Charlotte Elisabeth Wittekind

    2015-01-01

    Full Text Available Using variants of the emotional Stroop task (EST), a large number of studies have demonstrated attentional biases in individuals with PTSD across different types of trauma. However, the specificity and robustness of the emotional Stroop effect in PTSD have recently been questioned. In particular, the paradigm cannot disentangle the underlying cognitive mechanisms. Transgenerational studies provide evidence that the consequences of trauma are not limited to the traumatized people but extend to close relatives, especially the children. To further investigate attentional biases in PTSD and to shed light on the underlying cognitive mechanism(s), a spatial-cueing paradigm with pictures of different emotional valence (neutral, anxiety, depression, trauma) was administered to individuals displaced as children during World War II with (n = 22) and without PTSD (n = 26), as well as to nontraumatized controls (n = 22). To assess whether parental PTSD is associated with biased information processing in children, one adult offspring of each participant was also included in the study. PTSD was not associated with attentional biases for trauma-related stimuli. There was no evidence for a transgenerational transmission of biased information processing. However, when samples were regrouped based on current depression, a reduced inhibition of return (IOR) effect emerged for depression-related cues. IOR refers to the phenomenon that, with longer intervals between cue and target, the validity effect is reversed: uncued locations are associated with shorter and cued locations with longer RTs. The results diverge from EST studies and demonstrate that findings on attentional biases yield equivocal results across different paradigms. Attentional biases for trauma-related material may only appear for verbal but not for visual stimuli in an elderly population with childhood trauma with PTSD. Future studies should more closely investigate whether findings from younger trauma populations also manifest in older

  5. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    Science.gov (United States)

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  6. Effect of Exogenous Cues on Covert Spatial Orienting in Deaf and Normal Hearing Individuals.

    Science.gov (United States)

    Prasad, Seema Gorur; Patil, Gouri Shanker; Mishra, Ramesh Kumar

    2015-01-01

    Deaf individuals have been known to process visual stimuli better at the periphery compared to the normal-hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricity. In this study, we examined whether the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. We elicited both a saccadic and a manual response. The deaf showed a higher cueing effect for the ocular responses than the normal-hearing participants. However, there was no group difference for the manual responses. There was also higher facilitation at the periphery for both saccadic and manual responses, irrespective of group. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in deaf individuals.

  7. Dorsal and ventral working memory-related brain areas support distinct processes in contextual cueing.

    Science.gov (United States)

    Manginelli, Angela A; Baumgartner, Florian; Pollmann, Stefan

    2013-02-15

    Behavioral evidence suggests that the use of implicitly learned spatial contexts for improved visual search may depend on visual working memory resources. Working memory may be involved in contextual cueing in different ways: (1) for keeping implicitly learned working memory contents available during search or (2) for the capture of attention by contexts retrieved from memory. We mapped brain areas that were modulated by working memory capacity. Within these areas, activation was modulated by contextual cueing along the descending segment of the intraparietal sulcus, an area that has previously been related to maintenance of explicit memories. Increased activation for learned displays, but not modulated by the size of contextual cueing, was observed in the temporo-parietal junction area, previously associated with the capture of attention by explicitly retrieved memory items, and in the ventral visual cortex. This pattern of activation extends previous research on dorsal versus ventral stream functions in memory guidance of attention to the realm of attentional guidance by implicit memory. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

    We present a new method for extracting multi-scale salient features on meshes. It is based on robust estimation of curvature at multiple scales. The correspondence between a salient feature and its scale of interest can be established straightforwardly: detailed features appear at small scales, while features carrying more global shape information show up at large scales. We demonstrate that this multi-scale description of features accords with human perception and can further be used in several applications, such as feature classification and viewpoint selection. Experiments show that our method is a very helpful multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  9. Regional brain response to visual food cues is a marker of satiety that predicts food choice.

    Science.gov (United States)

    Mehta, Sonya; Melhorn, Susan J; Smeraglio, Anne; Tyagi, Vidhi; Grabowski, Thomas; Schwartz, Michael W; Schur, Ellen A

    2012-11-01

    Neuronal processes that underlie the subjective experience of satiety after a meal are not well defined. We investigated how satiety alters the perception of and neural response to visual food cues. Normal-weight participants (10 men, 13 women) underwent 2 fMRI scans while viewing images of high-calorie food that was previously rated as incompatible with weight loss and "fattening" and low-calorie, "nonfattening" food. After a fasting fMRI scan, participants ate a standardized breakfast and underwent reimaging at a randomly assigned time 15-300 min after breakfast to vary the degree of satiety. Measures of subjective appetite, food appeal, and ad libitum food intake (measured after the second fMRI scan) were correlated with activation by "fattening" (compared with "nonfattening") food cues in a priori regions of interest. Greater hunger correlated with higher appeal ratings of "fattening" (r = 0.46, P = 0.03) but not "nonfattening" (r = -0.20, P = 0.37) foods. Fasting amygdalar activation was negatively associated with fullness (left: r = -0.52; right: r = -0.58; both P ≤ 0.01), whereas postbreakfast fullness was positively correlated with activation in the dorsal striatum (right: r = 0.44; left: r = 0.45) by foods with higher fat content. Postmeal satiety is reflected in regional brain activation by images of high-calorie foods. Regions including the amygdala, nucleus accumbens, and dorsal striatum may alter the perception of, and reduce the motivation to consume, energy-rich foods, ultimately driving food choice. This trial was registered at clinicaltrials.gov as NCT01631045.

  10. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.

  11. Prosody production networks are modulated by sensory cues and social context.

    Science.gov (United States)

    Klasen, Martin; von Marschall, Clara; Isman, Güldehen; Zvyagintsev, Mikhail; Gur, Ruben C; Mathiak, Klaus

    2018-03-05

    The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and the effect of social feedback from a human listener. Online speech filtering enabled fMRI during prosodic communication in 30 participants. Emotional vocalizations were a) free, b) auditorily cued, c) visually cued, or d) with interactive feedback. In addition to distributed language networks, cued emotions increased activity in auditory and - in case of visual stimuli - visual cortex. Responses were larger in the right posterior superior temporal gyrus (pSTG) and the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language, and reward networks contributed to prosody production and were modulated by cues and social context. The right pSTG is a central hub for communication in social interactions - in particular for interpersonal evaluation of vocal emotions.

  12. Magnocellular Bias in Exogenous Attention to Biologically Salient Stimuli as Revealed by Manipulating Their Luminosity and Color.

    Science.gov (United States)

    Carretié, Luis; Kessel, Dominique; García-Rubio, María J; Giménez-Fernández, Tamara; Hoyos, Sandra; Hernández-Lorca, María

    2017-10-01

    Exogenous attention is a set of mechanisms that allow us to detect and reorient toward salient events, such as appetitive or aversive stimuli, that appear outside the current focus of attention. The nature of these mechanisms, particularly the involvement of the parvocellular and magnocellular visual processing systems, was explored. Thirty-four participants performed a demanding digit categorization task while salient (spiders or S) and neutral (wheels or W) stimuli were presented as distractors under two figure-ground formats: heterochromatic/isoluminant (exclusively processed by the parvocellular system, Par trials) and isochromatic/heteroluminant (preferentially processed by the magnocellular system, Mag trials). This resulted in four conditions: SPar, SMag, WPar, and WMag. Behavioral (RTs and error rates in the task) and electrophysiological (ERPs) indices of exogenous attention were analyzed. Behavior showed greater attentional capture by SMag than by SPar distractors and enhanced modulation of SMag capture as fear of spiders reported by participants increased. ERPs reflected a sequence from magnocellular dominant (P1p, ≃120 msec) to both magnocellular and parvocellular processing (N2p and P2a, ≃200 msec). Importantly, amplitudes in one N2p subcomponent were greater to SMag than to SPar and WMag distractors, indicating greater magnocellular sensitivity to saliency. Taken together, the results support a magnocellular bias in exogenous attention toward distractors of any nature during initial processing, a bias that remains in later stages when biologically salient distractors are present.

  13. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue

    2006-01-01

    BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movement aids the recognition of spoken words. Conventionally it is thought that the visual input disambiguates otherwise confusing auditory input. HYPOTHESIS: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p…

  14. Visual Sexual Stimuli-Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors.

    Science.gov (United States)

    Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play a role of reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS similarly as for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS.

  15. Visual Sexual Stimuli—Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors

    Science.gov (United States)

    Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play a role of reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS similarly as for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS. PMID:27574507

  16. Cue-Induced Brain Activation in Chronic Ketamine-Dependent Subjects, Cigarette Smokers, and Healthy Controls: A Task Functional Magnetic Resonance Imaging Study

    Directory of Open Access Journals (Sweden)

    Yanhui Liao

    2018-03-01

    Full Text Available Background: Observations of drug-related cues may induce craving in drug-dependent patients, prompting compulsive drug-seeking behavior. Sexual dysfunction is common in drug users. The aim of the study was to examine regional brain activation to drug-associated (ketamine, cigarette smoking) cues and natural (sexual) rewards. Methods: A sample of 129 participants [40 ketamine-use smokers (KUS), 45 non-ketamine-use smokers (NKUS), and 44 non-ketamine-use non-smoking healthy controls (HC)] underwent functional magnetic resonance imaging (fMRI) while viewing ketamine-related, smoking, and sexual films. Results: We found that KUS showed significantly increased activation in the anterior cingulate cortex and precuneus in response to ketamine cues. Ketamine users (KUS) showed lower activation in the cerebellum and middle temporal cortex compared with non-ketamine users (NKUS) and HC in response to sexual cues. Smokers (KUS and NKUS) showed higher activation in the right precentral frontal cortex in response to smoking cues. Non-ketamine users (NKUS and HC) showed significantly increased activation of the cerebellum and middle temporal cortex while viewing sexual cues. Conclusion: These findings clearly show the engagement of distinct neural circuitry for drug-related stimuli in chronic ketamine users. While smokers (both KUS and NKUS) showed overlapping differences in activation for smoking cues, the former group showed a specific neural response to relevant (i.e., ketamine-related) cues. In particular, the heightened response in the anterior cingulate cortex may have important implications for how attentionally salient such cues are in this group. The lower activation of ketamine users (KUS) in response to sexual cues may partly reflect the neural basis of sexual dysfunction.

  17. Contextual cueing based on the semantic-category membership of the environment

    OpenAIRE

    GOUJON, A

    2005-01-01

    During the analysis of a visual scene, top-down processing is constantly directing the subject's attention to the zones of interest in the scene. The contextual cueing paradigm developed by Chun and Jiang (1998) shows how contextual regularities can facilitate the search for a particular element via implicit learning mechanisms. In the proposed study, a contextual cueing task with lexical displays was used. The semantic-category membership of the contextual words predicted the location of the t...

  18. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    Science.gov (United States)

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze-cue stimuli.

  19. Action Planning Mediates Guidance of Visual Attention from Working Memory

    Directory of Open Access Journals (Sweden)

    Tobias Feldmann-Wüstefeld

    2015-01-01

    Full Text Available Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interferences.

  20. Empathy, Pain and Attention: Cues that Predict Pain Stimulation to the Partner and the Self Capture Visual Attention

    Directory of Open Access Journals (Sweden)

    Lingdan Wu

    2017-09-01

    Full Text Available Empathy motivates helping and cooperative behaviors and plays an important role in social interactions and personal communication. The present research examined the hypothesis that a state of empathy guides attention towards stimuli significant to others in a similar way as to stimuli relevant to the self. Sixteen couples in romantic partnerships were examined in a pain-related empathy paradigm including an anticipation phase and a stimulation phase. Abstract visual symbols (i.e., arrows and flashes) signaled the delivery of a Pain or Nopain stimulus to the partner or the self while dense-sensor event-related potentials (ERPs) were simultaneously recorded from both persons. During the anticipation phase, stimuli predicting Pain compared to Nopain stimuli to the partner elicited a larger early posterior negativity (EPN) and late positive potential (LPP), which were similar in topography and latency to the EPN and LPP modulations elicited by stimuli signaling pain for the self. Noteworthy, using abstract cue symbols to cue Pain and Nopain stimuli suggests that these effects are not driven by perceptual features. The findings demonstrate that symbolic stimuli relevant for the partner capture attention, which implies a state of empathy to the pain of the partner. From a broader perspective, states of empathy appear to regulate attention processing according to the perceived needs and goals of the partner.

  1. Integration of Distinct Objects in Visual Working Memory Depends on Strong Objecthood Cues Even for Different-Dimension Conjunctions.

    Science.gov (United States)

    Balaban, Halely; Luria, Roy

    2016-05-01

    What makes an integrated object in visual working memory (WM)? Past evidence suggested that WM holds all features of multidimensional objects together, but struggles to integrate color-color conjunctions. This difficulty was previously attributed to a challenge in same-dimension integration, but here we argue that it arises from the integration of 2 distinct objects. To test this, we examined the integration of distinct different-dimension features (a colored square and a tilted bar). We monitored the contralateral delay activity, an event-related potential component sensitive to the number of objects in WM. The results indicated that color and orientation belonging to distinct objects in a shared location were not integrated in WM (Experiment 1), even following a common fate Gestalt cue (Experiment 2). These conjunctions were better integrated in a less demanding task (Experiment 3), and in the original WM task, but with a less individuating version of the original stimuli (Experiment 4). Our results identify the critical factor in WM integration at same- versus separate-objects, rather than at same- versus different-dimensions. Compared with the perfect integration of an object's features, the integration of several objects is demanding, and depends on an interaction between the grouping cues and task demands, among other factors. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Target-nontarget similarity decreases search efficiency and increases stimulus-driven control in visual search.

    Science.gov (United States)

    Barras, Caroline; Kerzel, Dirk

    2017-10-01

    Some points of criticism against the idea that attentional selection is controlled by bottom-up processing were dispelled by the attentional window account. The attentional window account claims that saliency computations during visual search are only performed for stimuli inside the attentional window. Therefore, a small attentional window may avoid attentional capture by salient distractors because it is likely that the salient distractor is located outside the window. In contrast, a large attentional window increases the chances of attentional capture by a salient distractor. Large and small attentional windows have been associated with efficient (parallel) and inefficient (serial) search, respectively. We compared the effect of a salient color singleton on visual search for a shape singleton during efficient and inefficient search. To vary search efficiency, the nontarget shapes were either similar or dissimilar with respect to the shape singleton. We found that interference from the color singleton was larger with inefficient than efficient search, which contradicts the attentional window account. While inconsistent with the attentional window account, our results are predicted by computational models of visual search. Because of target-nontarget similarity, the target was less salient with inefficient than efficient search. Consequently, the relative saliency of the color distractor was higher with inefficient than with efficient search. Accordingly, stronger attentional capture resulted. Overall, the present results show that bottom-up control by stimulus saliency is stronger when search is difficult, which is inconsistent with the attentional window account.

  3. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    Science.gov (United States)

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  4. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    Directory of Open Access Journals (Sweden)

    Saki Takao

    2018-01-01

    Full Text Available The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  5. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention.

    Science.gov (United States)

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2017-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  6. Slushy weightings for the optimal pilot model. [considering visual tracking task

    Science.gov (United States)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  7. Late development of cue integration is linked to sensory fusion in cortex.

    Science.gov (United States)

    Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko

    2015-11-02

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years) no such fusion was evident, suggesting that these sensory integration mechanisms are still to develop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Low-level visual attention and its relation to joint attention in autism spectrum disorder.

    Science.gov (United States)

    Jaworski, Jessica L Bean; Eigsti, Inge-Marie

    2017-04-01

    Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.

  9. Working memory biasing of visual perception without awareness.

    Science.gov (United States)

    Pan, Yi; Lin, Bingyuan; Zhao, Yajun; Soto, David

    2014-10-01

    Previous research has demonstrated that the contents of visual working memory can bias visual processing in favor of matching stimuli in the scene. However, the extent to which such top-down, memory-driven biasing of visual perception is contingent on conscious awareness remains unknown. Here we showed that conscious awareness of critical visual cues is dispensable for working memory to bias perceptual selection mechanisms. Using the procedure of continuous flash suppression, we demonstrated that "unseen" visual stimuli during interocular suppression can gain preferential access to awareness if they match the contents of visual working memory. Strikingly, the very same effect occurred even when the visual cue to be held in memory was rendered nonconscious by masking. Control experiments ruled out the alternative accounts of repetition priming and different detection criteria. We conclude that working memory biases of visual perception can operate in the absence of conscious awareness.

  10. Attentional Bias for Uncertain Cues of Shock in Human Fear Conditioning: Evidence for Attentional Learning Theory

    Directory of Open Access Journals (Sweden)

    Stephan Koenig

    2017-05-01

    Full Text Available We conducted a human fear conditioning experiment in which three different color cues were followed by an aversive electric shock on 0, 50, and 100% of the trials, and thus induced low (L), partial (P), and high (H) shock expectancy, respectively. The cues differed with respect to the strength of their shock association (L < P < H) and the uncertainty of their prediction (L < P > H). During conditioning we measured pupil dilation and ocular fixations to index differences in the attentional processing of the cues. After conditioning, the shock-associated colors were introduced as irrelevant distracters during visual search for a shape target while shocks were no longer administered, and we analyzed the cues’ potential to capture and hold overt attention automatically. Our findings suggest that fear conditioning creates an automatic attention bias for the conditioned cues that depends on their correlation with the aversive outcome. This bias was exclusively linked to the strength of the cues’ shock association for the early attentional processing of cues in the visual periphery, but additionally was influenced by the uncertainty of the shock prediction after participants fixated on the cues. These findings are in accord with attentional learning theories that formalize how associative learning shapes automatic attention.

  11. Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations

    Science.gov (United States)

    Mirloo, Mahsa; Ebrahimnezhad, Hosein

    2018-03-01

    In this paper, a novel method is proposed to detect 3D object salient points robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points from object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previously selected salient points, a new point is added to this set in each iteration. With each added salient point, the decision function is updated; this creates a condition for selecting the next point such that it is not extracted from the same protrusion part, so that a representative point is guaranteed to be drawn from every protrusion part. This method is stable against model variations under isometric transformations, scaling, and noise of different strengths because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
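    The iterative selection described above resembles farthest-point sampling under geodesic distance: pick a first point with maximal average geodesic distance to a few random seeds, then repeatedly pick the point farthest from all points chosen so far, so no two picks come from the same protrusion. A minimal sketch of that idea, not the authors' implementation; the adjacency-list mesh representation, function names, and the nearest-distance decision function are illustrative assumptions:

    ```python
    import heapq
    import random

    def geodesic_dists(adj, src):
        # Dijkstra shortest paths from src over a weighted adjacency list
        # {vertex: [(neighbor, edge_length), ...]} approximating geodesics.
        dist = {v: float('inf') for v in adj}
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist[v]:
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    def select_salient(adj, k, n_random=3, seed=0):
        rng = random.Random(seed)
        verts = list(adj)
        # First salient point: maximal average geodesic distance
        # to a small set of random points (fewer averages = cheaper).
        sample = rng.sample(verts, min(n_random, len(verts)))
        avg = {v: 0.0 for v in verts}
        for s in sample:
            d = geodesic_dists(adj, s)
            for v in verts:
                avg[v] += d[v] / len(sample)
        salient = [max(verts, key=avg.get)]
        # Decision function: distance to the nearest already-selected point.
        # Maximizing it keeps successive picks on different protrusions.
        nearest = geodesic_dists(adj, salient[0])
        for _ in range(k - 1):
            nxt = max(verts, key=lambda v: nearest[v])
            salient.append(nxt)
            d = geodesic_dists(adj, nxt)
            for v in verts:
                nearest[v] = min(nearest[v], d[v])
        return salient
    ```

    On a simple path-shaped graph (two "protrusions" at its ends), this sketch selects the two endpoints first, mirroring how the method favors one representative point per protrusion.
    
    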

  12. Perceptual Training in Beach Volleyball Defence: Different Effects of Gaze-Path Cueing on Gaze and Decision-Making

    Directory of Open Access Journals (Sweden)

    André Klostermann

    2015-12-01

    Full Text Available For perceptual-cognitive skill training, a variety of intervention methods has been proposed, including the so-called colour-cueing method, which aims at superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially with regard to its actual effects on gaze behaviour. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance, with the largest improvements for the control group followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behaviour, but they also question its benefit for enhancing decision-making. However, before completely denying the method’s value, optimisations should be checked regarding, for instance, cueing-pattern characteristics and gaze-related feedback.

  13. The ability of left- and right-hemisphere damaged individuals to produce prosodic cues to disambiguate Korean idiomatic sentences

    Directory of Open Access Journals (Sweden)

    Seung-Yun Yang

    2014-05-01

    Three speech-language pathologists with training in phonetics participated as raters for vocal qualities. Nasality was a significantly salient vocal quality of idiomatic utterances. Conclusion: The findings support that (1) LHD negatively affected the production of durational cues and RHD negatively affected the production of fundamental frequency cues in idiomatic-literal contrasts; (2) healthy listeners successfully identified idiomatic and literal versions of ambiguous sentences produced by healthy speakers but not by RHD speakers; (3) productions in brain-damaged participants approximated HCs’ measures in the repetition tasks, but not in the elicitation tasks; (4) nasal voice quality was judged to be associated with idiomatic utterances in all groups of participants. Findings agree with previous studies indicating HCs’ abilities to discriminate literal versus idiomatic meanings in ditropically ambiguous idioms, as well as deficient processing of pitch production and impaired pragmatic ability in RHD.

  14. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Moreover, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech seem to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from all the other competing noises.

  15. Effects of teacher-directed versus student-directed instruction and cues versus no cues for improving spelling performance

    OpenAIRE

    Gettinger, Maribeth

    1985-01-01

    The purpose of this study was twofold: to examine the effects of imitating children's spelling errors alone and in combination with visual and verbal cues on spelling accuracy and retention among poor spellers and to compare the effectiveness of student-directed versus teacher-directed spelling instruction on children's spelling accuracy and retention. Nine children received four alternating experimental treatments during a 16-week spelling program. Results indicated that student-directed ins...

  16. I can see what you are saying: Auditory labels reduce visual search times.

    Science.gov (United States)

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Visual and auditory reaction time for air traffic controllers using quantitative electroencephalograph (QEEG) data.

    Science.gov (United States)

    Abbass, Hussein A; Tang, Jiangjun; Ellejmi, Mohamed; Kirby, Stephen

    2014-12-01

    The use of quantitative electroencephalograph in the analysis of air traffic controllers' performance can reveal with a high temporal resolution those mental responses associated with different task demands. To understand the relationship between visual and auditory correct responses, reaction time, and the corresponding brain areas and functions, air traffic controllers were given an integrated visual and auditory continuous reaction task. Strong correlations were found between correct responses to the visual target and the theta band in the frontal lobe, the total power in the medial of the parietal lobe and the theta-to-beta ratio in the left side of the occipital lobe. Incorrect visual responses triggered activations in additional bands including the alpha band in the medial of the frontal and parietal lobes, and the Sensorimotor Rhythm in the medial of the parietal lobe. Controllers' responses to visual cues were found to be more accurate but slower than their corresponding performance on auditory cues. These results suggest that controllers are more susceptible to overload when more visual cues are used in the air traffic control system, and more errors are pruned as more auditory cues are used. Therefore, workload studies should be carried out to assess the usefulness of additional cues and their interactions with the air traffic control environment.

  18. Orientation is different: Interaction between contour integration and feature contrasts in visual search.

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei; Zhaoping, Li

    2013-09-10

    Salient items usually capture attention and are beneficial to visual search. Jingling and Tseng (2013), nevertheless, have discovered that a salient collinear column can impair local visual search. The display used in that study had 21 rows and 27 columns of bars, all uniformly horizontal (or vertical) except for one column of bars orthogonally oriented to all other bars, making this unique column of collinear (or noncollinear) bars salient in the display. Observers discriminated an oblique target bar superimposed on one of the bars either in the salient column or in the background. Interestingly, responses were slower for a target in a salient collinear column than in the background. This opens a theoretical question of how contour integration interacts with salience computation, which is addressed here by an examination of how salience modulated the search impairment from the collinear column. We show that the collinear column needs to have a high orientation contrast with its neighbors to exert search interference. A collinear column of high contrast in color or luminance did not produce the same impairment. Our results show that orientation-defined salience interacted with collinear contour differently from other feature dimensions, which is consistent with the neuronal properties in V1.

  19. Orienting attention within visual short-term memory: development and mechanisms.

    Science.gov (United States)

    Shimi, Andria; Nobre, Anna C; Astle, Duncan; Scerif, Gaia

    2014-01-01

    How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to encoding or during maintenance. Cues improved memory regardless of their position, but younger children benefited less from cues presented during maintenance, and these benefits related to VSTM span over and above basic memory in uncued trials. In Experiment 2, cues of low validity eliminated benefits, suggesting that even the youngest children use cues voluntarily, rather than automatically. These findings elucidate the close coupling between developing visuospatial attentional control and VSTM. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  20. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    Science.gov (United States)

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger (sighted), older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  1. Examining the durability of incidentally learned trust from gaze cues.

    Science.gov (United States)

    Strachan, James W A; Tipper, Steven P

    2017-10-01

    In everyday interactions we find our attention follows the eye gaze of faces around us. As this cueing is so powerful and difficult to inhibit, gaze can therefore be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this we used a gaze-cueing paradigm where faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  2. Pigeons Exhibit Contextual Cueing to Both Simple and Complex Backgrounds

    Science.gov (United States)

    Wasserman, Edward A.; Teng, Yuejia; Castro, Leyre

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of this contextual cueing effect using a novel Cueing-Miscueing design. Pigeons had to peck a target which could appear in one of four possible locations on four possible color backgrounds or four possible color photographs of real-world scenes. On 80% of the trials, each of the contexts was uniquely paired with one of the target locations; on the other 20% of the trials, each of the contexts was randomly paired with the remaining target locations. Pigeons came to exhibit robust contextual cueing when the context preceded the target by 2 s, with reaction times to the target being shorter on correctly-cued trials than on incorrectly-cued trials. Contextual cueing proved to be more robust with photographic backgrounds than with uniformly colored backgrounds. In addition, during the context-target delay, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. These findings confirm the effectiveness of animal models of contextual cueing and underscore the important part played by associative learning in producing the effect. PMID:24491468

  3. Top-down and bottom-up aspects of active search in a real-world environment.

    Science.gov (United States)

    Foulsham, Tom; Chapman, Craig; Nasiopoulos, Eleni; Kingstone, Alan

    2014-03-01

    Visual search has been studied intensively in the laboratory, but lab search often differs from search in the real world in many respects. Here, we used a mobile eye tracker to record the gaze of participants engaged in a realistic, active search task. Participants were asked to walk into a mailroom and locate a target mailbox among many similar mailboxes. This procedure allowed control of bottom-up cues (by making the target mailbox more salient; Experiment 1) and top-down instructions (by informing participants about the cue; Experiment 2). The bottom-up salience of the target had no effect on the overall time taken to search for the target, although the salient target was more likely to be fixated and found once it was within the central visual field. Top-down knowledge of target appearance had a larger effect, reducing the need for multiple head and body movements, and meaning that the target was fixated earlier and from further away. Although there remains much to be discovered in complex real-world search, this study demonstrates that principles from visual search in the laboratory influence gaze in natural behaviour, and provides a bridge between these laboratory studies and research examining vision in natural tasks.

  4. Cue reactivity towards shopping cues in female participants.

    Science.gov (United States)

    Starcke, Katrin; Schlereth, Berenike; Domass, Debora; Schöler, Tobias; Brand, Matthias

    2013-03-01

    Background and aims: It is currently under debate whether pathological buying can be considered a behavioural addiction. Addictions have often been investigated with cue-reactivity paradigms to assess subjective, physiological and neural craving reactions. The current study aims at testing whether cue reactivity towards shopping cues is related to pathological buying tendencies. Methods: A sample of 66 non-clinical female participants rated shopping-related pictures concerning valence, arousal, and subjective craving. In a subgroup of 26 participants, electrodermal reactions towards those pictures were additionally assessed. Furthermore, all participants were screened concerning pathological buying tendencies and baseline craving for shopping. Results: Results indicate a relationship between the subjective ratings of the shopping cues and pathological buying tendencies, even if baseline craving for shopping was controlled for. Electrodermal reactions were partly related to the subjective ratings of the cues. Conclusions: Cue reactivity may be a potential correlate of pathological buying tendencies. Thus, pathological buying may be accompanied by craving reactions towards shopping cues. Results support the assumption that pathological buying can be considered a behavioural addiction. From a methodological point of view, the results support the view that the cue-reactivity paradigm is suited for the investigation of craving reactions in pathological buying, and future studies should implement this paradigm in clinical samples.

  5. Optimization of Visual Information Presentation for Visual Prosthesis

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2018-01-01

    Full Text Available Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the visual perception, a huge loss of information occurs when presenting daily scenes. The ability to recognize objects in real-life scenarios is severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that a visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis.
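The two processing strategies can be illustrated with a toy NumPy sketch. This is not the paper's pipeline: the binary saliency mask is assumed to be given (the paper obtains it from a salient object detection technique), the function names are hypothetical, and a simple gradient-magnitude threshold stands in for whatever edge detector the authors used.

```python
import numpy as np

def zoom_foreground(img, mask):
    """Foreground zooming with background clutter removal: blank the
    background and crop to the salient object's bounding box.
    `mask` is a hypothetical binary saliency map (1 = object)."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    cleaned = img * mask  # suppress background clutter
    return cleaned[y0:y1, x0:x1]

def edge_map(img, thresh=0.1):
    """Foreground edge detection stand-in: gradient magnitude
    thresholded to a binary edge image."""
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)
```

In a simulated-prosthetic-vision setting, the cropped foreground would then be downsampled to the implant's phosphene grid; that final step is omitted here because it depends on device-specific parameters.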

  6. Optimization of Visual Information Presentation for Visual Prosthesis

    Science.gov (United States)

    Gao, Yong

    2018-01-01

    Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the visual perception, a huge loss of information occurs when presenting daily scenes. The ability to recognize objects in real-life scenarios is severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that a visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis. PMID:29731769

  7. Uncovering Dangerous Cheats: How Do Avian Hosts Recognize Adult Brood Parasites?

    Science.gov (United States)

    Trnka, Alfréd; Prokop, Pavol; Grim, Tomáš

    2012-01-01

    Background: Co-evolutionary struggles between dangerous enemies (e.g., brood parasites) and their victims (hosts) lead to the emergence of sophisticated adaptations and counter-adaptations. Salient host tricks to reduce parasitism costs include, as a front-line defence, adult enemy discrimination. In contrast to the well-studied egg stage, investigations addressing the specific cues for adult enemy recognition are rare. Previous studies have suggested that barred underparts and yellow eyes may provide cues for the recognition of cuckoos Cuculus canorus by their hosts; however, no study to date has examined the role of the two cues simultaneously under a consistent experimental paradigm. Methodology/Principal Findings: We modify and extend previous work using a novel experimental approach: custom-made dummies with various combinations of hypothesized recognition cues. The salient recognition cue turned out to be the yellow eye. Barred underparts, the only trait examined previously, had a statistically significant but small effect on host aggression, highlighting the importance of effect size vs. statistical significance. Conclusion: The relative importance of eye vs. underpart phenotypes may reflect the ecological context of host-parasite interaction: yellow eyes are conspicuous from the typical direction of host arrival (from above), whereas barred underparts are poorly visible (being visually blocked by the upper part of the cuckoo's body). This visual constraint may reduce the usefulness of barred underparts as a reliable recognition cue in a typical situation near host nests. We propose a novel hypothesis that recognition cues for enemy detection can vary in a context-dependent manner (e.g., depending on whether the enemy is approached from below or from above).
Further we suggest a particular cue can trigger fear reactions (escape) in some hosts/populations whereas the same cue can trigger aggression (attack) in other hosts/populations depending on presence/absence of dangerous

  8. Effects of mosquitofish (Gambusia affinis) cues on wood frog (Lithobates sylvaticus) tadpole activity

    Directory of Open Access Journals (Sweden)

    Katherine F. Buttermore

    2011-06-01

    Full Text Available We examined the changes in activity of wood frog (Lithobates sylvaticus) tadpoles exposed to combinations of visual, chemical, and mechanical cues of the invasive mosquitofish (Gambusia affinis). We also examined whether the responses of the tadpoles to the predator cues were influenced by the short-term accumulation of chemical cues in the experimental container. In our experiment, the activity of wood frog (L. sylvaticus) tadpoles was not affected by the presence of various cues from mosquitofish. Our experiment demonstrated that the repeated use of trial water can influence the activity level of tadpoles, regardless of the predator cue treatment used. Tadpoles in the first trial tended to be less active than tadpoles in subsequent trials. This effect does not appear to be mediated by the accumulation of predator cues, since there was no significant interaction term. Our results suggest that short-term accumulation of predator chemical cues does not affect the behavior of wood frog tadpoles; however, the repeated use of the same water in consecutive trials may affect tadpole behavior, perhaps through the accumulation of conspecific chemical cues.

  9. The effects of motion and g-seat cues on pilot simulator performance of three piloting tasks

    Science.gov (United States)

    Showalter, T. W.; Parris, B. L.

    1980-01-01

    Data are presented that show the effects of motion system cues, g-seat cues, and pilot experience on pilot performance during takeoffs with engine failures, during in-flight precision turns, and during landings with wind shear. Eight groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The basic cueing system was a fixed-base type (no-motion cueing) with visual cueing. The other three systems were produced by the presence of either a motion system or a g-seat, or both. Extensive statistical analysis of the data was performed and representative performance means were examined. These data show that the addition of motion system cueing results in significant improvement in pilot performance for all three tasks; however, the use of g-seat cueing, either alone or in conjunction with the motion system, provides little if any performance improvement for these tasks and for this aircraft type.

  10. The impact of napping on memory for future-relevant stimuli: Prioritization among multiple salience cues.

    Science.gov (United States)

    Bennion, Kelly A; Payne, Jessica D; Kensinger, Elizabeth A

    2016-06-01

    Prior research has demonstrated that sleep enhances memory for future-relevant information, including memory for information that is salient due to emotion, reward, or knowledge of a later memory test. Although sleep has been shown to prioritize information with any of these characteristics, the present study investigates the novel question of how sleep prioritizes information when multiple salience cues exist. Participants encoded scenes that were future-relevant based on emotion (emotional vs. neutral), reward (rewarded vs. unrewarded), and instructed learning (intentionally vs. incidentally encoded), preceding a delay consisting of a nap, an equivalent time period spent awake, or a nap followed by wakefulness (to control for effects of interference). Recognition testing revealed that when multiple dimensions of future relevance co-occur, sleep prioritizes top-down, goal-directed cues (instructed learning, and to a lesser degree, reward) over bottom-up, stimulus-driven characteristics (emotion). Further, results showed that these factors interact; the effect of a nap on intentionally encoded information was especially strong for neutral (relative to emotional) information, suggesting that once one cue for future relevance is present, there are diminishing returns with additional cues. Sleep may binarize information based on whether it is future-relevant or not, preferentially consolidating memory for the former category. Potential neural mechanisms underlying these selective effects and the implications of this research for educational and vocational domains are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Time-resolved neuroimaging of visual short term memory consolidation by post-perceptual attention shifts.

    Science.gov (United States)

    Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan

    2016-01-15

    Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300ms or 850ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Effects of visual and chemical cues on orientation behavior of the Red Sea hermit crab Clibanarius signatus

    Directory of Open Access Journals (Sweden)

    Tarek Gad El-Kareem Ismail

    2012-03-01

    Full Text Available The directional orientation of Clibanarius signatus toward different gastropod shell targets was studied in a circular arena upon exposure to background seawater, calcium concentrations, and predatory odor. Directional orientation was absent when crabs were presented with the white background alone. Each shell was tested in different positions (e.g., anterior, posterior, upside-down, lateral). Adult crabs were tested without their gastropod shells, and orientation varied with concentration and chemical cue. With calcium, orientation increased as concentration increased up to a maximum attraction percentage, after which attraction became stable. With predator cues, some individuals swam away from the target toward the opposite direction, representing a predator-avoidance response. Whenever the blind hermit crab C. signatus was exposed to a shell target combined with calcium or predator cues, most individuals stopped moving or moved in circles around the arena center; the others exhibited a uniform orientation distribution. Responsiveness was higher to calcium cues than to predator cues. Thus, in the absence of vision, individual hermit crabs were able to detect both calcium and predator cues and responded differently to each.

  13. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker

    2017-09-18

    We sought to examine the contribution of visual cues to audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded from their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.

  14. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    Science.gov (United States)

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object.

    Science.gov (United States)

    Persuh, Marjan; Melara, Robert D

    2016-01-01

    In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  16. Beyond magic traits: Multimodal mating cues in Heliconius butterflies.

    Science.gov (United States)

    Mérot, Claire; Frérot, Brigitte; Leppik, Ene; Joron, Mathieu

    2015-11-01

    Species coexistence involves the evolution of reproductive barriers opposing gene flow. Heliconius butterflies display colorful patterns affecting mate choice and survival through warning signaling and mimicry. These patterns are called "magic traits" for speciation because divergent natural selection may promote mimicry shifts in pattern whose role as mating cue facilitates reproductive isolation. By contrast, between comimetic species, natural selection promotes pattern convergence. We addressed whether visual convergence interferes with reproductive isolation by testing for sexual isolation between two closely related species with similar patterns, H. timareta thelxinoe and H. melpomene amaryllis. Experiments with models confirmed visual attraction based on wing phenotype, leading to indiscriminate approach. Nevertheless, mate choice experiments showed assortative mating. Monitoring male behavior toward live females revealed asymmetry in male preference, H. melpomene males courting both species equally while H. timareta males strongly preferred conspecifics. Experiments with hybrid males suggested an important genetic component for such asymmetry. Behavioral observations support a key role for short-distance cues in determining male choice in H. timareta. Scents extracts from wings and genitalia revealed interspecific divergence in chemical signatures, and hybrid female scent composition was significantly associated with courtship intensity by H. timareta males, providing candidate chemical mating cues involved in sexual isolation. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.

  17. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.

    Science.gov (United States)

    Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J

    2013-01-01

    Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).

  18. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  19. Sound arithmetic: auditory cues in the rehabilitation of impaired fact retrieval.

    Science.gov (United States)

    Domahs, Frank; Zamarian, Laura; Delazer, Margarete

    2008-04-01

    The present single case study describes the rehabilitation of an acquired impairment of multiplication fact retrieval. In addition to a conventional drill approach, one set of problems was preceded by auditory cues while the other set was not. After extensive repetition, non-specific improvements could be observed for all trained problems (e.g., 3 * 7) as well as for their non-trained complementary problems (e.g., 7 * 3). Beyond this general improvement, specific therapy effects were found for problems trained with auditory cues. These specific effects were attributed to an involvement of implicit memory systems and/or attentional processes during training. Thus, the present results demonstrate that cues in the training of arithmetic facts do not have to be visual to be effective.

  20. Two (or three) is one too many: testing the flexibility of contextual cueing with multiple target locations

    OpenAIRE

    Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J.

    2011-01-01

    Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whe...

  1. Acquisition of Conditioning between Methamphetamine and Cues in Healthy Humans.

    Directory of Open Access Journals (Sweden)

    Joel S Cavallo

    Full Text Available Environmental stimuli repeatedly paired with drugs of abuse can elicit conditioned responses that are thought to promote future drug seeking. We recently showed that healthy volunteers acquired conditioned responses to auditory and visual stimuli after just two pairings with methamphetamine (MA, 20 mg, oral). This study extended these findings by systematically varying the number of drug-stimuli pairings. We expected that more pairings would result in stronger conditioning. Three groups of healthy adults were randomly assigned to receive 1, 2 or 4 pairings (Groups P1, P2 and P4, Ns = 13, 16, 16, respectively) of an auditory-visual stimulus with MA, and another stimulus with placebo (PBO). Drug-cue pairings were administered in an alternating, counterbalanced order, under double-blind conditions, during 4 hr sessions. MA produced prototypic subjective effects (mood, ratings of drug effects) and alterations in physiology (heart rate, blood pressure). Although subjects did not exhibit increased behavioral preference for, or emotional reactivity to, the MA-paired cue after conditioning, they did exhibit an increase in attentional bias (initial gaze toward the drug-paired stimulus). Further, subjects who had four pairings reported "liking" the MA-paired cue more than the PBO cue after conditioning. Thus, the number of drug-stimulus pairings, varying from one to four, had only modest effects on the strength of conditioned responses. Further studies investigating the parameters under which drug conditioning occurs will help to identify risk factors for developing drug abuse, and provide new treatment strategies.

  2. The relationship between two visual communication systems: reading and lipreading.

    Science.gov (United States)

    Williams, A

    1982-12-01

    To explore the relationship between reading and lipreading and to determine whether readers and lipreaders use similar strategies to comprehend verbal messages, 60 female junior and sophomore high school students--30 good and 30 poor readers--were given a filmed lipreading test, a test to measure eye-voice span, a test of cloze ability, and a test of their ability to comprehend printed material presented one word at a time in the absence of an opportunity to regress or scan ahead. The results of this study indicated that (a) there is a significant relationship between reading and lipreading ability; (b) although good readers may be either good or poor lipreaders, poor readers are more likely to be poor than good lipreaders; (c) there are similarities in the strategies used by readers and lipreaders in their approach to comprehending spoken and written material; (d) word-by-word reading of continuous prose appears to be a salient characteristic of both poor reading and poor lipreading ability; and (e) good readers and lipreaders do not engage in word-by-word reading but rather use a combination of visual and linguistic cues to interpret written and spoken messages.

  3. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson's Disease

    NARCIS (Netherlands)

    Janssen, S.; Bolte, B.; Nonnekes, J.H.; Bittner, M.; Bloem, B.R.; Heida, T.; Zhao, Y; Wezel, R.J.A. van

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality

  4. Perception and psychological evaluation for visual and auditory environment based on the correlation mechanisms

    Science.gov (United States)

    Fujii, Kenji

    2002-06-01

    In this dissertation, the correlation mechanism in modeling the process in the visual perception is introduced. It has been well described that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that it is possible to apply the correlation mechanism to the process in temporal vision and spatial vision, as well as in audition. (1) The psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the phenomenon of missing fundamental is found in temporal vision as analogous to the auditory pitch perception. This implies the existence of correlation mechanism in visual system. (2) For spatial vision, the autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgment. (3) In addition, the autocorrelation function (ACF) and inter-aural cross-correlation function (IACF) were applied for analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described. These analyses provided useful parameters extracted from the ACF and IACF in assessing the subjective annoyance for noise. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
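
    The missing-fundamental phenomenon reported above falls out of a plain autocorrelation analysis: the ACF of a compound signal whose fundamental is physically absent still peaks at the fundamental period. A minimal sketch follows; the sampling rate and component frequencies are illustrative choices, not values from the thesis:

    ```python
    import numpy as np

    def acf_peak_lag(x, min_lag, max_lag):
        """Lag (in samples) of the largest autocorrelation peak in [min_lag, max_lag)."""
        n = len(x)
        acf = np.correlate(x, x, mode="full")[n - 1:]  # lags 0 .. n-1
        acf = acf / acf[0]                             # normalize so acf[0] == 1
        return min_lag + int(np.argmax(acf[min_lag:max_lag]))

    fs = 8000
    t = np.arange(0, 0.5, 1.0 / fs)
    # Components at 200, 300, 400 Hz: harmonics of a 100 Hz fundamental that is absent
    x = sum(np.sin(2 * np.pi * f * t) for f in (200, 300, 400))
    lag = acf_peak_lag(x, min_lag=20, max_lag=160)
    pitch = fs / lag  # dominant periodicity recovered at the missing 100 Hz fundamental
    ```

    The same correlation mechanism, applied to a flickering luminance signal instead of a sound pressure signal, is what the dissertation uses to explain missing-fundamental effects in temporal vision.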

  5. Interaction of chemical cues from fish tissues and organophosphorous pesticides on Ceriodaphnia dubia survival

    International Nuclear Information System (INIS)

    Maul, Jonathan D.; Farris, Jerry L.; Lydy, Michael J.

    2006-01-01

    Cladocera are frequently used as test organisms for assessing chemical and effluent toxicity and have been shown to respond to stimuli and cues from potential predators. In this study, the interactive effects of visual and chemical cues of fish and two organophosphorous pesticides on survival of Ceriodaphnia dubia were examined. A significant chemical cue (homogenized Pimephales promelas) and malathion interaction was observed on C. dubia survival (P = 0.006). Chemical cue and 2.82 μg/L malathion resulted in a 76.0% reduction in survival compared to malathion alone (P < 0.01). Furthermore, potentiation of malathion toxicity varied based on the source of chemical cues (i.e., epithelial or whole body). It is unclear in this study whether these chemical cues elicited a predation-related stress in C. dubia. Future research should examine the mechanism of this interaction and determine what role, if any, stress responses by C. dubia might play in the interaction. - Potentiation of organophosphorous pesticide toxicity to Ceriodaphnia dubia by fathead minnow (Pimephales promelas) chemical cues was observed

  6. Contextual cueing improves attentional guidance, even when guidance is supposedly optimal.

    Science.gov (United States)

    Harris, Anthony M; Remington, Roger W

    2017-05-01

    Visual search through previously encountered contexts typically produces reduced reaction times compared with search through novel contexts. This contextual cueing benefit is well established, but there is debate regarding its underlying mechanisms. Eye-tracking studies have consistently shown reduced number of fixations with repetition, supporting improvements in attentional guidance as the source of contextual cueing. However, contextual cueing benefits have been shown in conditions in which attentional guidance should already be optimal, namely when attention is captured to the target location by an abrupt onset, or under pop-out conditions. These results have been used to argue for a response-related account of contextual cueing. Here, we combine eye tracking with response time to examine the mechanisms behind contextual cueing in spatially cued and pop-out conditions. Three experiments find consistent response time benefits with repetition, which appear to be driven almost entirely by a reduction in number of fixations, supporting improved attentional guidance as the mechanism behind contextual cueing. No differences were observed in the time between fixating the target and responding-our proxy for response related processes. Furthermore, the correlation between contextual cueing magnitude and the reduction in number of fixations on repeated contexts approaches 1. These results argue strongly that attentional guidance is facilitated by familiar search contexts, even when guidance is near-optimal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. How visual short-term memory maintenance modulates subsequent visual aftereffects.

    Science.gov (United States)

    Saad, Elyana; Silvanto, Juha

    2013-05-01

    Prolonged viewing of a visual stimulus can result in sensory adaptation, giving rise to perceptual phenomena such as the tilt aftereffect (TAE). However, it is not known if short-term memory maintenance induces such effects. We examined how visual short-term memory (VSTM) maintenance modulates the strength of the TAE induced by subsequent visual adaptation. We reasoned that if VSTM maintenance induces aftereffects on subsequent encoding of visual information, then it should either enhance or reduce the TAE induced by a subsequent visual adapter, depending on the congruency of the memory cue and the adapter. Our results were consistent with this hypothesis and thus indicate that the effects of VSTM maintenance can outlast the maintenance period.

  8. Use of olfactory cues by newly metamorphosed wood frogs (Lithobates sylvaticus) during emigration

    Science.gov (United States)

    Zydlewski, Joseph D.; Popescu, Viorel D.; Brodie, Bekka S.; Hunter, Malcom L.

    2012-01-01

    Juvenile amphibians are capable of long-distance upland movements, yet cues used for orientation during upland movements are poorly understood. We used newly metamorphosed Wood Frogs (Lithobates sylvaticus) to investigate: (1) the existence of innate (i.e., inherited) directionality, and (2) the use of olfactory cues, specifically forested wetland and natal pond cues during emigration. In a circular arena experiment, animals with assumed innate directionality did not orient in the expected direction (suggested by previous studies) when deprived of visual and olfactory cues. This suggests that juvenile Wood Frogs most likely rely on proximate cues for orientation. Animals reared in semi-natural conditions (1500 l cattle tanks) showed a strong avoidance of forested wetland cues in two different experimental settings, although they had not been previously exposed to such cues. This finding is contrary to known habitat use by adult Wood Frogs during summer. Juvenile Wood Frogs were indifferent to the chemical signature of natal pond (cattle tank) water. Our findings suggest that management strategies for forest amphibians should consider key habitat features that potentially influence the orientation of juveniles during emigration movements, as well as adult behavior.

  9. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    Science.gov (United States)

    Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

    In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning association of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance of the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of higher false alarm rate in the passive learning mode by using a computational model, composed of a reinforcement learning module and a memory-decay module. The results suggest that the higher rate of memory decay is the source of making more mistakes and reporting lower confidence in non-congruent pairs in the passive learning mode.
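
    The abstract names the model only at the level of a reinforcement-learning module plus a memory-decay module. A minimal sketch of that combination, with entirely hypothetical parameters, shows how a faster-decaying memory trace ends up with a weaker learned association (and hence more false alarms):

    ```python
    def learn_association(n_trials, alpha=0.3, decay=0.02):
        """Delta-rule learning of a cue-outcome association with per-trial memory decay.

        alpha: learning rate (reinforcement-learning module)
        decay: fraction of the memory trace lost between trials (memory-decay module)
        All parameter values here are hypothetical.
        """
        v = 0.0  # association strength
        for _ in range(n_trials):
            v *= 1.0 - decay        # memory-decay module: trace fades between trials
            v += alpha * (1.0 - v)  # delta-rule update toward full association
        return v

    active = learn_association(40, decay=0.02)   # active/attended learning: slow decay
    passive = learn_association(40, decay=0.20)  # passive learning: faster decay
    ```

    Under this toy model a higher decay rate caps the asymptotic association strength, mirroring the paper's conclusion that a higher rate of memory decay, rather than a different learning rule, produces the extra mistakes in the passive mode.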

  10. Visual cues for manual control of headway

    Directory of Open Access Journals (Sweden)

    Simon Hosking

    2013-05-01

    Full Text Available The ability to maintain appropriate gaps to objects in one's environment is important when navigating through a three-dimensional world. Previous research has shown that the visual angle subtended by a lead/approaching object and its rate of change are important variables for timing interceptions, collision avoidance, continuous regulation of braking, and manual control of headway. However, investigations of headway maintenance have required participants to maintain a fixed following distance and have not investigated how information about speed is taken into account. In the following experiment, we asked participants to use a joystick to follow computer-simulated lead objects. The results showed that ground texture, following speed, and the size of the lead object had significant effects on both mean following distances and following distance variance. Furthermore, models of the participants' joystick responses provided better fits when it was assumed that the desired visual extent of the lead object would vary over time. Taken together, the results indicate that while information about own-speed is used by controllers to set the desired headway to a lead object, the continuous regulation of headway is influenced primarily by the visual angle of the lead object and its rate of change. The reliance on visual angle, its rate of change, and/or own-speed information also varied depending on the control dynamics of the system. Such findings are consistent with an optimal control criterion that reflects a differential weighting on different sources of information depending on the plant dynamics. As in other judgements of motion in depth, the information used for controlling headway to other objects in the environment varies depending on the constraints of the task and different strategies of control.
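
    The two optical variables named above follow directly from the lead object's width and the headway: the visual angle is θ = 2·atan(w/2D), and its rate of change is θ̇ = w·v/(D² + w²/4) for closing speed v. A quick sketch (the vehicle width and speeds are illustrative, not values from the study):

    ```python
    import math

    def visual_angle(width, distance):
        """Visual angle (radians) subtended by a lead object of `width` at headway `distance` (m)."""
        return 2.0 * math.atan(width / (2.0 * distance))

    def visual_angle_rate(width, distance, closing_speed):
        """Time derivative of the visual angle (rad/s); closing_speed > 0 means the gap is shrinking."""
        return width * closing_speed / (distance ** 2 + (width / 2.0) ** 2)

    # Illustrative numbers: a 1.8 m wide lead vehicle at 30 m headway,
    # with the following vehicle closing at 2 m/s.
    theta = visual_angle(1.8, 30.0)
    theta_dot = visual_angle_rate(1.8, 30.0, 2.0)
    ```

    A controller that regulates headway on visual angle alone would hold θ (and drive θ̇ toward zero) rather than hold the metric distance D, which is why object size affected following distance in the experiment.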

  11. The Use of Cues in Multimedia Instructions in Technology as a Way to Reduce Cognitive Load

    Science.gov (United States)

    Roberts, William

    2017-01-01

    This study was designed to address cognitive overload issues through the use of visual cueing as a means to enhance learning. While there has been significant research such as use of color for cueing to address many of the cited problems, there are missing elements in this research that could go a long way toward designing more effective solutions…

  12. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    Science.gov (United States)

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Global cue inconsistency diminishes learning of cue validity

    Directory of Open Access Journals (Sweden)

    Tony Wang

    2016-11-01

    Full Text Available We present a novel two-stage probabilistic learning task that examines the participants’ ability to learn and utilize valid cues across several levels of probabilistic feedback. In the first stage, participants sample from one of three cues that gives predictive information about the outcome of the second stage. Participants are rewarded for correct prediction of the outcome in stage two. Only one of the three cues gives valid predictive information and thus participants can maximise their reward by learning to sample from the valid cue. The validity of this predictive information, however, is reinforced across several levels of probabilistic feedback. A second manipulation involved changing the consistency of the predictive information in stage one and the outcome in stage two. The results show that participants, with higher probabilistic feedback, learned to utilise the valid cue. In inconsistent task conditions, however, participants were significantly less successful in utilising higher validity cues. We interpret this result as implying that learning in probabilistic categorization is based on developing a representation of the task that allows for goal-directed action.

  14. An auditory cue-depreciation effect.

    Science.gov (United States)

    Gibson, J M; Watkins, M J

    1991-01-01

    An experiment is reported in which subjects first heard a list of words and then tried to identify these same words from degraded utterances. Paralleling previous findings in the visual modality, the probability of identifying a given utterance was reduced when the utterance was immediately preceded by other, more degraded, utterances of the same word. A second experiment replicated this "cue-depreciation effect" and in addition found the effect to be weakened, if not eliminated, when the target word was not included in the initial list or when the test was delayed by two days.

  15. Setting and changing feature priorities in visual short-term memory.

    Science.gov (United States)

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.

  16. The Effect of Eye Contact Is Contingent on Visual Awareness

    Directory of Open Access Journals (Sweden)

    Shan Xu

    2018-02-01

    Full Text Available The present study explored how eye contact at different levels of visual awareness influences gaze-induced joint attention. We adopted a spatial-cueing paradigm, in which an averted gaze was used as an uninformative central cue for a joint-attention task. Prior to the onset of the averted-gaze cue, either supraliminal (Experiment 1) or subliminal (Experiment 2) eye contact was presented. The results revealed a larger subsequent gaze-cueing effect following supraliminal eye contact compared to a no-contact condition. In contrast, the gaze-cueing effect was smaller in the subliminal eye-contact condition than in the no-contact condition. These findings suggest that the facilitation effect of eye contact on coordinating social attention depends on visual awareness. Furthermore, subliminal eye contact might have an impact on subsequent social attention processes that differ from supraliminal eye contact. This study highlights the need to further investigate the role of eye contact in implicit social cognition.
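
    The gaze-cueing effect reported above is simply the mean response-time cost of invalidly cued relative to validly cued targets. A minimal sketch, using hypothetical RTs chosen to reproduce the reported ordering (supraliminal > no contact > subliminal):

    ```python
    from statistics import mean

    def cueing_effect(rt_invalid, rt_valid):
        """Gaze-cueing effect: mean RT (ms) on invalidly cued minus validly cued trials."""
        return mean(rt_invalid) - mean(rt_valid)

    # Hypothetical per-block mean RTs (ms), not data from the study:
    supraliminal = cueing_effect([380, 395, 410], [340, 350, 345])  # eye contact seen
    no_contact   = cueing_effect([370, 380, 375], [355, 360, 350])
    subliminal   = cueing_effect([365, 372, 368], [358, 366, 362])  # eye contact masked
    ```

    A larger difference means the averted gaze shifted attention more strongly, which is the quantity the two experiments compare across awareness conditions.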

  17. Neural basis of uncertain cue processing in trait anxiety.

    Science.gov (United States)

    Zhang, Meng; Ma, Chao; Luo, Yanyan; Li, Ji; Li, Qingwei; Liu, Yijun; Ding, Cody; Qiu, Jiang

    2016-02-19

    Individuals with high trait anxiety form a non-clinical group with a predisposition for an anxiety-related bias in emotional and cognitive processing that is considered by some to be a prerequisite for psychiatric disorders. Anxious individuals tend to experience more worry under uncertainty, and processing uncertain information is an important, but often overlooked factor in anxiety. We therefore explored the brain correlates of processing uncertain information in individuals with high trait anxiety using the learn-test paradigm. Behaviorally, the percentages on memory test and the likelihood ratios of identifying novel stimuli under uncertainty were similar to the certain fear condition, but different from the certain neutral condition. The brain results showed that the visual cortex, bilateral fusiform gyrus, and right parahippocampal gyrus were active during the processing of uncertain cues. Moreover, we found that trait anxiety was positively correlated with the BOLD signal of the right parahippocampal gyrus during the processing of uncertain cues. No significant results were found in the amygdala during uncertain cue processing. These results suggest that memory retrieval is associated with uncertain cue processing, which is underpinned by over-activation of the right parahippocampal gyrus, in individuals with high trait anxiety.

  18. Barack Obama Blindness (BOB): Absence of visual awareness to a single object

    Directory of Open Access Journals (Sweden)

    Marjan ePersuh

    2016-03-01

    In two experiments we evaluated whether a perceiver’s prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  19. Cognitive control over visual food cue saliency is greater in reduced-overweight/obese but not in weight relapsed women: An EEG study.

    Science.gov (United States)

    Hume, David John; Howells, Fleur Margaret; Karpul, David; Rauch, H G Laurie; Kroff, Jacolene; Lambert, Estelle Victoria

    2015-12-01

    Poor weight management may relate to a reduction in neurobehavioural control over food intake and heightened reactivity of the brain's neural reward pathways. Here we explore the neurophysiology of food-related visual cue processing in weight reduced and weight relapsed women by assessing differences in cortical arousal and attentional processing using a food-Stroop paradigm. 51 women were recruited into 4 groups: reduced-weight participants (RED, n=14) compared to BMI matched low-weight controls (LW-CTL, n=18); and weight relapsed participants (REL, n=10) compared to BMI matched high-weight controls (HW-CTL, n=9). Eating behaviour and body image questionnaires were completed. Two Stroop tasks (one containing food images, the other containing neutral images) were completed while electroencephalography (EEG) was recorded. Differences in cortical arousal were found in RED versus LW-CTL women, and were seen during food task execution only. Compared to their controls, RED women exhibited lower relative delta band power (p=0.01) and higher relative beta band power (p=0.01) over the right frontal cortex (F4). Within the RED group, delta band oscillations correlated positively with self-reported habitual fat intake and with body shape dissatisfaction. As compared to women matched for phenotype but with no history of weight reduction, reduced-overweight/obese women show increased neurobehavioural control over external food cues and the inhibition of reward-orientated feeding responses. Insight into these self-regulatory mechanisms which attenuate food cue saliency may aid in the development of cognitive remediation therapies which facilitate long-term weight loss.
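
    The "relative band power" measure reported above is, in the usual EEG convention, the power within a frequency band divided by the total spectral power. A minimal, dependency-free sketch of that computation, assuming a single-channel signal and the conventional band edges (delta ≈ 1-4 Hz, beta ≈ 13-30 Hz); the study's actual preprocessing and electrode montage are not reproduced here:

    ```python
    # Illustrative sketch: relative band power via a naive one-sided DFT.
    # A toy 1 s signal mixes a strong 2 Hz (delta) and a weak 20 Hz (beta) component.
    import cmath
    import math

    def band_powers(signal, fs, bands):
        """Return each band's power as a fraction of total (non-DC) power."""
        n = len(signal)
        # One-sided power spectrum from a naive DFT (fine for short toy signals).
        spectrum = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                            for t in range(n))) ** 2
                    for k in range(n // 2 + 1)]
        freqs = [k * fs / n for k in range(n // 2 + 1)]
        total = sum(spectrum[1:])  # exclude the DC bin
        return {name: sum(p for f, p in zip(freqs, spectrum) if lo <= f < hi) / total
                for name, (lo, hi) in bands.items()}

    fs = 128  # sampling rate in Hz (hypothetical)
    sig = [2.0 * math.sin(2 * math.pi * 2 * i / fs)      # delta component
           + 0.5 * math.sin(2 * math.pi * 20 * i / fs)   # beta component
           for i in range(fs)]
    rel = band_powers(sig, fs, {"delta": (1, 4), "beta": (13, 30)})
    print(rel)  # delta dominates; beta is a small fraction
    ```

    In practice one would use Welch's method over artifact-rejected epochs rather than a raw DFT, but the normalization step is the same.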

  20. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    Science.gov (United States)

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying the multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at both behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues for pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
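
    The sensitivity index d' used here is the standard signal-detection measure, d' = z(hit rate) − z(false-alarm rate). A minimal sketch with invented trial counts (the study's data are not reproduced), including the common log-linear correction that keeps z finite when a rate hits 0 or 1:

    ```python
    # Illustrative sketch of d' for an oddball discrimination task.
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with log-linear correction."""
        # Log-linear (Hautus) correction: add 0.5 to each cell of the table.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Hypothetical counts: 45 hits / 5 misses on oddballs, 8 FAs / 42 CRs on standards.
    print(f"d' = {d_prime(45, 5, 8, 42):.2f}")
    ```

    Larger d' under congruent visual cues than under auditory-only presentation is the kind of visually induced gain the study reports.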