WorldWideScience

Sample records for suprapostural visual tasks

  1. On the functional integration between postural and supra-postural tasks on the basis of contextual cues and task constraint.

    Science.gov (United States)

    de Lima, Andrea Cristina; de Azevedo Neto, Raymundo Machado; Teixeira, Luis Augusto

    2010-10-01

    In order to evaluate the effects of uncertainty about the direction of mechanical perturbation and of supra-postural task constraint on postural control, young adults had their upright stance perturbed while holding a tray in a horizontal position. Stance was perturbed by moving a supporting platform forward or backward, contrasting situations of certainty versus uncertainty about the direction of displacement. Increased constraint on postural stability was imposed by a supra-postural task of equilibrating a cylinder on the tray. Performance was assessed through EMG of anterior leg muscles, angular displacement of the main joints involved in the postural reactions, and displacement of the tray. Results showed that both certainty about the direction of perturbation and increased supra-postural task constraint led to decreased angular displacement of the knee and the hip. Furthermore, the combination of certainty and high supra-postural task constraint produced shorter latency of muscular activation. Such postural responses were paralleled by decreased displacement of the tray. These results suggest a functional integration between the tasks, with central set priming reactive postural responses from contextual cues and increased stability demand. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Effects of Attentional Focus and Age on Suprapostural Task Performance and Postural Control

    Science.gov (United States)

    McNevin, Nancy; Weir, Patricia; Quinn, Tiffany

    2013-01-01

    Purpose: Suprapostural task performance (manual tracking) and postural control (sway and frequency) were examined as a function of attentional focus, age, and tracking difficulty. Given the performance benefits often found under external focus conditions, it was hypothesized that external focus instructions would promote superior tracking and…

  3. Visual tasks and postural sway in children with and without autism spectrum disorders.

    Science.gov (United States)

    Chang, Chih-Hui; Wade, Michael G; Stoffregen, Thomas A; Hsu, Chin-Yu; Pan, Chien-Yu

    2010-01-01

    We investigated the influences of two different suprapostural visual tasks, visual searching and visual inspection, on the postural sway of children with and without autism spectrum disorder (ASD). Sixteen ASD children (age=8.75±1.34 years; height=130.34±11.03 cm) were recruited from a local support group. Individuals with an intellectual disability as a co-occurring condition and those with severe behavior problems that required formal intervention were excluded. Twenty-two sex- and age-matched typically developing (TD) children (age=8.93±1.39 years; height=133.47±8.21 cm) were recruited from a local public elementary school. Postural sway was recorded using a magnetic tracking system (Flock of Birds, Ascension Technologies, Inc., Burlington, VT). Results indicated that the ASD children exhibited greater sway than the TD children. Despite this difference, both TD and ASD children showed reduced sway during the search task, relative to sway during the inspection task. These findings replicate those of Stoffregen et al. (2000), Stoffregen, Giveans, et al. (2009), Stoffregen, Villard, et al. (2009) and Prado et al. (2007) and extend them to TD children as well as ASD children. Both TD and ASD children were able to functionally modulate postural sway to facilitate the performance of a task that required higher perceptual effort. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Neural Correlates of Task Cost for Stance Control with an Additional Motor Task: Phase-Locked Electroencephalogram Responses

    Science.gov (United States)

    Hwang, Ing-Shiou; Huang, Cheng-Ya

    2016-01-01

    With appropriate reallocation of central resources, the ability to maintain an erect posture is not necessarily degraded by a concurrent motor task. This study investigated the neural control of a particular postural-suprapostural procedure involving brain mechanisms to solve crosstalk between posture and motor subtasks. Participants completed a single posture task and a dual task in which they concurrently performed force-matching while maintaining a tilted stabilometer stance at a target angle. Stabilometer movements and event-related potentials (ERPs) were recorded. The added force-matching task increased the irregularity of the postural response rather than the size of the postural response prior to force-matching. In addition, the added force-matching task during stabilometer stance led to marked topographic ERP modulation, with greater P2 positivity in the frontal and sensorimotor-parietal areas of the N1-P2 transitional phase and in the sensorimotor-parietal area of the late P2 phase. The time-frequency distribution of the ERP primary principal component revealed that the dual-task condition manifested more pronounced delta (1–4 Hz) and beta (13–35 Hz) synchronizations but suppressed theta activity (4–8 Hz) before force-matching. The dual-task condition also manifested coherent fronto-parietal delta activity in the P2 period. In addition to a decrease in postural regularity, this study reveals spatio-temporal and temporal-spectral reorganizations of ERPs in the fronto-sensorimotor-parietal network due to the added suprapostural motor task. For this particular postural-suprapostural task set, the behavioral and neural data suggest a facilitatory role for autonomous postural responses and an expansion of central resources, with increased interregional interactions for task shifting and planning of the suprapostural motor task. PMID:27010634

  5. Neural basis of postural focus effect on concurrent postural and motor tasks: phase-locked electroencephalogram responses.

    Science.gov (United States)

    Huang, Cheng-Ya; Zhao, Chen-Guang; Hwang, Ing-Shiou

    2014-11-01

    Dual-task performance is strongly affected by the direction of attentional focus. This study investigated neural control of a postural-suprapostural procedure when postural focus strategy varied. Twelve adults concurrently conducted force-matching and maintained stabilometer stance with visual feedback on ankle movement (visual internal focus, VIF) and on stabilometer movement (visual external focus, VEF). Force-matching error, dynamics of ankle and stabilometer movements, and event-related potentials (ERPs) were registered. Postural control with VEF caused superior force-matching performance, more complex ankle movement, and stronger kinematic coupling between the ankle and stabilometer movements than postural control with VIF. The postural focus strategy also altered ERP temporal-spatial patterns. Postural control with VEF resulted in later N1 with less negativity around the bilateral fronto-central and contralateral sensorimotor areas, earlier P2 deflection with more positivity around the bilateral fronto-central and ipsilateral temporal areas, and a late movement-related potential commencing in the left frontal-central area, as compared with postural control with VIF. The time-frequency distribution of the ERP principal component revealed phase-locked neural oscillations in the delta (1-4 Hz), theta (4-7 Hz), and beta (13-35 Hz) rhythms. The delta and theta rhythms were more pronounced prior to the timing of the P2 positive deflection, and beta rebound was greater after the completion of force-matching in the VEF condition than in the VIF condition. This study is the first to reveal the neural correlates of the postural focus effect on a postural-suprapostural task. Postural control with VEF takes advantage of efficient task-switching to facilitate autonomous postural response, in agreement with the "constrained-action" hypothesis. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A design space of visualization tasks.

    Science.gov (United States)

    Schulz, Hans-Jörg; Nocke, Thomas; Heitzler, Magnus; Schumann, Heidrun

    2013-12-01

    Knowledge about visualization tasks plays an important role in choosing or building suitable visual representations to pursue them. Yet, tasks are a multi-faceted concept and it is thus not surprising that the many existing task taxonomies and models all describe different aspects of tasks, depending on what these task descriptions aim to capture. This results in a clear need to bring these different aspects together under the common hood of a general design space of visualization tasks, which we propose in this paper. Our design space consists of five design dimensions that characterize the main aspects of tasks and that have so far been distributed across different task descriptions. We exemplify its concrete use by applying our design space in the domain of climate impact research. To this end, we propose interfaces to our design space for different user roles (developers, authors, and end users) that allow users of different levels of expertise to work with it.

  7. Interference with olfactory memory by visual and verbal tasks.

    Science.gov (United States)

    Annett, J M; Cook, N M; Leslie, J C

    1995-06-01

    It has been claimed that olfactory memory is distinct from memory in other modalities. This study investigated the effectiveness of visual and verbal tasks in interfering with olfactory memory and included methodological changes from other recent studies. Subjects were allocated to one of four experimental conditions involving interference tasks [no interference task; visual task; verbal task; visual-plus-verbal task] and were presented with 15 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Recognition and recall performance both showed interference effects from the visual and verbal tasks, but there was no effect of time of testing. While the results may be accommodated within a dual coding framework, further work is indicated to resolve theoretical issues relating to task complexity.

  8. 5D Task Analysis Visualization Tool, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The creation of a five-dimensional task analysis visualization (5D-TAV) software tool for Task Analysis and Workload Planning using multi-dimensional visualization...

  9. Early visual cortex reflects initiation and maintenance of task set

    Science.gov (United States)

    Elkhetali, Abdurahman S.; Vaden, Ryan J.; Pool, Sean M.

    2014-01-01

    The human brain is able to process information flexibly, depending on a person's task. The mechanisms underlying this ability to initiate and maintain a task set are not well understood, but they are important for understanding the flexibility of human behavior and developing therapies for disorders involving attention. Here we investigate the differential roles of early visual cortical areas in initiating and maintaining a task set. Using functional Magnetic Resonance Imaging (fMRI), we characterized three different components of task set-related, but trial-independent activity in retinotopically mapped areas of early visual cortex, while human participants performed attention demanding visual or auditory tasks. These trial-independent effects reflected: (1) maintenance of attention over a long duration, (2) orienting to a cue, and (3) initiation of a task set. Participants performed tasks that differed in the modality of stimulus to be attended (auditory or visual) and in whether there was a simultaneous distractor (auditory only, visual only, or simultaneous auditory and visual). We found that patterns of trial-independent activity in early visual areas (V1, V2, V3, hV4) depend on attended modality, but not on stimuli. Further, different early visual areas play distinct roles in the initiation of a task set. In addition, activity associated with maintaining a task set tracks with a participant's behavior. These results show that trial-independent activity in early visual cortex reflects initiation and maintenance of a person's task set. PMID:25485712

  10. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity.

    Science.gov (United States)

    Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P

    2018-01-01

    Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
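
    As an illustration of the three TVA components named above (perceptual threshold, visual processing speed, and VSTM storage capacity), the following minimal Monte Carlo sketch simulates whole report of a briefly exposed letter array. All numeric values, the equal attentional weights across items, and the probabilistic handling of a fractional capacity K are illustrative assumptions, not the fitting procedure used in TVA studies.

    ```python
    import numpy as np

    def simulate_whole_report(n_items=6, C=40.0, t0=0.02, K=3.5,
                              exposure=0.1, n_trials=10000, seed=0):
        """Mean number of letters reported from a briefly exposed array.

        C   total visual processing speed (items/s), split equally over items
        t0  perceptual threshold (s): exposures below t0 yield no encoding
        K   VSTM storage capacity (items); fractional K handled per trial
        """
        rng = np.random.default_rng(seed)
        effective = max(exposure - t0, 0.0)
        if effective == 0.0:
            return 0.0
        v = C / n_items                              # per-item encoding rate
        finish = rng.exponential(1.0 / v, size=(n_trials, n_items))
        encoded = (finish <= effective).sum(axis=1)  # items encoded in time
        k_trial = np.floor(K) + (rng.random(n_trials) < (K - np.floor(K)))
        return float(np.minimum(encoded, k_trial).mean())

    for ms in (20, 50, 100, 200):
        print(ms, "ms exposure ->", round(simulate_whole_report(exposure=ms / 1000), 2))
    ```

    In a model of this kind, the dual-task cost reported above would appear as lower fitted values of C and K, while t0 stays unchanged.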

  11. Classification of visual and linguistic tasks using eye-movement features.

    Science.gov (United States)

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
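
    A hedged sketch of the classification approach described above: extract per-trial eye-movement features and cross-validate a classifier against chance. The feature columns and the synthetic data below are placeholders; the study's features included, for example, initiation time and the entropy of attention allocation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials = 300
    # Toy feature matrix: columns might be initiation time, mean fixation
    # duration, attention-allocation entropy, total fixation on objects, ...
    X = rng.normal(size=(n_trials, 7))
    y = rng.integers(0, 3, size=n_trials)   # 0=search, 1=naming, 2=description

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean cross-validated accuracy:", scores.mean().round(2))
    # With random features accuracy stays near chance (~0.33); informative
    # eye-movement features are what drive the above-chance results reported.
    ```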

  12. Task context impacts visual object processing differentially across the cortex

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J.; Baker, Chris I.

    2014-01-01

    Perception reflects an integration of “bottom-up” (sensory-driven) and “top-down” (internally generated) signals. Although models of visual processing often emphasize the central role of feed-forward hierarchical processing, less is known about the impact of top-down signals on complex visual representations. Here, we investigated whether and how the observer’s goals modulate object processing across the cortex. We examined responses elicited by a diverse set of objects under six distinct tasks, focusing on either physical (e.g., color) or conceptual properties (e.g., man-made). Critically, the same stimuli were presented in all tasks, allowing us to investigate how task impacts the neural representations of identical visual input. We found that task has an extensive and differential impact on object processing across the cortex. First, we found task-dependent representations in the ventral temporal and prefrontal cortex. In particular, although object identity could be decoded from the multivoxel response within task, there was a significant reduction in decoding across tasks. In contrast, the early visual cortex evidenced equivalent decoding within and across tasks, indicating task-independent representations. Second, task information was pervasive and present from the earliest stages of object processing. However, although the responses of the ventral temporal, prefrontal, and parietal cortex enabled decoding of both the type of task (physical/conceptual) and the specific task (e.g., color), the early visual cortex was not sensitive to type of task and could only be used to decode individual physical tasks. Thus, object processing is highly influenced by the behavioral goal of the observer, highlighting how top-down signals constrain and inform the formation of visual representations. PMID:24567402

  13. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity

    OpenAIRE

    Künstler, E. C. S.; Finke, K.; Günther, A.; Klingner, C.; Witte, O.; Bublak, P.

    2017-01-01

    Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the ‘theory of visual attention’ (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual sh...

  14. Validating a visual version of the metronome response task.

    Science.gov (United States)

    Laflamme, Patrick; Seli, Paul; Smilek, Daniel

    2018-02-12

    The metronome response task (MRT)-a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome-was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.
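
    The response-variability index that both the MRT and vMRT rely on can be illustrated with a short sketch: the standard deviation of response asynchronies relative to the (auditory or visual) metronome onsets. The inter-stimulus interval and simulated presses below are made-up values, not parameters from the studies.

    ```python
    import numpy as np

    isi = 1.3                                      # assumed metronome interval (s)
    onsets = np.arange(0, 30, isi)                 # stimulus onsets over a 30 s block
    rng = np.random.default_rng(1)
    responses = onsets + rng.normal(0.0, 0.08, onsets.size)   # simulated key presses

    asynchronies = responses - onsets              # signed response-onset differences
    print("response variability (SD):", round(asynchronies.std(), 3), "s")
    ```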

  15. Task-related modulation of visual neglect in cancellation tasks

    OpenAIRE

    Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon

    2008-01-01

    Unilateral neglect involves deficits of spatial exploration and awareness that do not always affect a fixed portion of extrapersonal space, but may vary with current stimulation and possibly with task demands. Here, we assessed any ‘top-down’, task-related influences on visual neglect, with novel experimental variants of the cancellation test. Many different versions of the cancellation test are used clinically, and can differ in the extent of neglect revealed, though the exact factors determ...

  16. Time-sharing visual and auditory tracking tasks

    Science.gov (United States)

    Tsang, Pamela S.; Vidulich, Michael A.

    1987-01-01

    An experiment is described which examined the benefits of distributing the input demands of two tracking tasks as a function of task integrality. Visual and auditory compensatory tracking tasks were utilized. Results indicate that presenting the two tracking signals in two input modalities did not improve time-sharing efficiency. This was attributed to the difficulty insensitivity phenomenon.

  17. 5D Task Analysis Visualization Tool Phase II, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The creation of a five-dimensional task analysis visualization (5D-TAV) software tool for Task Analysis and Workload Planning using multi-dimensional visualization...

  18. Brain functional network connectivity based on a visual task: visual information processing-related brain regions are significantly activated in the task state

    Directory of Open Access Journals (Sweden)

    Yan-li Yang

    2015-01-01

    It is not clear whether the method used in functional brain-network related research can be applied to explore the feature binding mechanism of visual perception. In this study, we investigated feature binding of color and shape in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task to construct brain networks active during resting and task states. Results showed that brain regions involved in visual information processing were obviously activated during the task. The components were partitioned using a greedy algorithm, indicating the visual network existed during the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the result showed that occipital and lingual gyri were stable brain regions in the visual system network, the parietal lobe played a very important role in the binding process of color features and shape features, and the fusiform and inferior temporal gyri were crucial for processing color and shape information. Experimental findings indicate that understanding visual feature binding and cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception.

  19. Advert saliency distracts children's visual attention during task-oriented internet use

    Directory of Open Access Journals (Sweden)

    Nils Holmberg

    2014-02-01

    The general research question of the present study was to assess the impact of visually salient online adverts on children's task-oriented internet use. In order to answer this question, an experimental study was constructed in which 9-year-old and 12-year-old Swedish children were asked to solve a number of tasks while interacting with a mockup website. In each trial, web adverts in several saliency conditions were presented. By measuring both children's task accuracy and the visual processing involved in solving these tasks, this study allows us to infer how two types of visual saliency affect children's attentional behavior, and whether such behavioral effects also impact their task performance. Analyses show that low-level visual features and task relevance in online adverts have different effects on performance measures and process measures, respectively. Whereas task performance is stable with regard to several advert saliency conditions, a marked effect is seen on children's gaze behavior. On the other hand, task performance is shown to be more sensitive to individual differences such as age, gender and level of gaze control. The results provide evidence about cognitive and behavioral distraction effects in children's task-oriented internet use caused by visual saliency in online adverts. The experiment suggests that children to some extent are able to compensate for behavioral effects caused by distracting visual stimuli when solving prospective memory tasks. Suggestions are given for further research into the interdisciplinary area between media research and cognitive science.

  20. Influence of social presence on eye movements in visual search tasks.

    Science.gov (United States)

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.
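
    For reference, the oculomotor measures compared above (fixation count and duration, saccade amplitude, scan path length) can be computed from a fixation sequence as in this sketch; the coordinates and durations are example values only.

    ```python
    import numpy as np

    fix_xy = np.array([[100, 120], [220, 130], [230, 300], [400, 310]], float)  # px
    fix_dur = np.array([0.21, 0.18, 0.25, 0.30])                                # s

    saccade_amp = np.linalg.norm(np.diff(fix_xy, axis=0), axis=1)  # px per saccade
    print("fixation count:", len(fix_xy))
    print("mean fixation duration:", round(fix_dur.mean(), 3), "s")
    print("mean saccade amplitude:", round(saccade_amp.mean(), 1), "px")
    print("scan path length:", round(saccade_amp.sum(), 1), "px")
    ```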

  1. Task-relevant perceptual features can define categories in visual memory too.

    Science.gov (United States)

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  2. Mixed Initiative Visual Analytics Using Task-Driven Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Cook, Kristin A.; Cramer, Nicholas O.; Israel, David; Wolverton, Michael J.; Bruce, Joseph R.; Burtner, Edwin R.; Endert, Alexander

    2015-12-07

    Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.

  3. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Task set induces dynamic reallocation of resources in visual short-term memory.

    Science.gov (United States)

    Sheremata, Summer L; Shomstein, Sarah

    2017-08-01

    Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory-load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.

  5. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  6. Investigating the visual span in comparative search: the effects of task difficulty and divided attention.

    Science.gov (United States)

    Pomplun, M; Reingold, E M; Shen, J

    2001-09-01

    In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.

  7. Task-related Functional Connectivity Dynamics in a Block-designed Visual Experiment

    Directory of Open Access Journals (Sweden)

    Xin Di

    2015-09-01

    Studying task modulations of brain connectivity using functional magnetic resonance imaging (fMRI) is critical to understand brain functions that support cognitive and affective processes. Existing methods such as psychophysiological interaction (PPI) and dynamic causal modelling (DCM) usually implicitly assume that the connectivity patterns are stable over a block-designed task with identical stimuli. However, this assumption lacks empirical verification on high-temporal-resolution fMRI data with reliable data-driven analysis methods. The present study performed a detailed examination of dynamic changes of functional connectivity (FC) in a simple block-designed visual checkerboard experiment with a sub-second sampling rate (TR = 0.645 s) by estimating the time-varying correlation coefficient (TVCC) between BOLD responses of different brain regions. We observed reliable task-related FC changes (i.e., FCs were transiently decreased after task onset and went back to the baseline afterward) among several visual regions of the bilateral middle occipital gyrus (MOG) and the bilateral fusiform gyrus (FuG). Importantly, only the FCs between higher visual regions (MOG) and lower visual regions (FuG) exhibited such dynamic patterns. The results suggested that simply assuming a sustained FC during a task block may be insufficient to capture distinct task-related FC changes. The investigation of FC dynamics in tasks could improve our understanding of condition shifts and the coordination between different activated brain regions.
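
    A minimal sketch of a time-varying correlation coefficient of the kind described above is a sliding-window Pearson correlation between two regional BOLD signals. The window length and the synthetic signals are assumptions for illustration; the study's exact estimator may differ.

    ```python
    import numpy as np

    def tvcc(x, y, window):
        """Sliding-window Pearson correlation between two 1-D time series."""
        out = np.full(len(x), np.nan)
        half = window // 2
        for t in range(half, len(x) - half):
            out[t] = np.corrcoef(x[t - half:t + half + 1],
                                 y[t - half:t + half + 1])[0, 1]
        return out

    tr = 0.645                                  # sampling interval from the abstract (s)
    t = np.arange(0, 300, tr)
    rng = np.random.default_rng(2)
    x = np.sin(2 * np.pi * 0.02 * t) + rng.normal(0, 0.5, t.size)
    y = np.sin(2 * np.pi * 0.02 * t) + rng.normal(0, 0.5, t.size)
    print("mean TVCC:", round(float(np.nanmean(tvcc(x, y, window=31))), 2))  # ~20 s window
    ```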

  8. The impact of task demand on visual word recognition.

    Science.gov (United States)

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
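
    The core idea of sharing deep features across related recognition tasks can be sketched as one backbone with one classification head per group of visually similar classes. This is only a schematic illustration; the visual-tree construction, the two regularization terms, and the incremental learning of HD-MTL are not reproduced, and the group names below are hypothetical.

    ```python
    import torch
    import torch.nn as nn

    class GroupedMultiTaskNet(nn.Module):
        """Shared CNN backbone with one head per group of similar classes."""
        def __init__(self, groups=None):
            super().__init__()
            groups = groups or {"felines": 5, "vehicles": 8}   # hypothetical groups
            self.backbone = nn.Sequential(                     # stand-in for a deep CNN
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.heads = nn.ModuleDict(
                {name: nn.Linear(16, n) for name, n in groups.items()})

        def forward(self, x):
            feats = self.backbone(x)                           # shared deep features
            return {name: head(feats) for name, head in self.heads.items()}

    net = GroupedMultiTaskNet()
    logits = net(torch.randn(2, 3, 64, 64))
    print({name: tuple(t.shape) for name, t in logits.items()})
    ```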

  10. Task- and age-dependent effects of visual stimulus properties on children's explicit numerosity judgments.

    Science.gov (United States)

    Defever, Emmy; Reynvoet, Bert; Gebuis, Titia

    2013-10-01

    Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues remained to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Slushy weightings for the optimal pilot model. [considering visual tracking task]

    Science.gov (United States)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
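
    For context, optimal-control pilot models of this kind typically minimize a quadratic cost of the tracked outputs, the control, and the control rate; the "weightings" referred to above are the matrices Q, R, and G. The form below is the standard one and is given as an assumption, since the abstract does not state the paper's exact cost function.

    ```latex
    J = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T
          \left( \mathbf{y}^\top Q\,\mathbf{y}
               + \mathbf{u}^\top R\,\mathbf{u}
               + \dot{\mathbf{u}}^\top G\,\dot{\mathbf{u}} \right) \mathrm{d}t \right\}
    ```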

  12. Hierarchical organization of brain functional networks during visual tasks.

    Science.gov (United States)

    Zhuo, Zhao; Cai, Shi-Min; Fu, Zhong-Qian; Zhang, Jie

    2011-09-01

    The functional network of the brain is known to demonstrate modular structure over different hierarchical scales. In this paper, we systematically investigated the hierarchical modular organizations of the brain functional networks that are derived from the extent of phase synchronization among high-resolution EEG time series during a visual task. In particular, we compare the modular structure of the functional network from EEG channels with that of the anatomical parcellation of the brain cortex. Our results show that the modular architectures of brain functional networks correspond well to those from the anatomical structures over different levels of hierarchy. Most importantly, we find that the consistency between the modular structures of the functional network and the anatomical network becomes more pronounced in terms of vision, sensory, vision-temporal, motor cortices during the visual task, which implies that the strong modularity in these areas forms the functional basis for the visual task. The structure-function relationship further reveals that the phase synchronization of EEG time series in the same anatomical group is much stronger than that of EEG time series from different anatomical groups during the task and that the hierarchical organization of functional brain network may be a consequence of functional segmentation of the brain cortex.
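
    The "extent of phase synchronization" used above to define the edges of the functional network is commonly quantified by the phase-locking value (PLV) between channel pairs; a small sketch with synthetic signals follows. The study's exact synchronization measure is not specified in the abstract, so PLV is an assumption here.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def plv(x, y):
        """Phase-locking value between two 1-D signals (0 = none, 1 = perfect)."""
        phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * phase_diff)))

    fs = 256
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(3)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.normal(size=t.size)
    print("PLV:", round(float(plv(x, y)), 2))   # one edge weight of the functional network
    ```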

  13. A Method to Train Marmosets in Visual Working Memory Task and Their Performance.

    Science.gov (United States)

    Nakamura, Katsuki; Koba, Reiko; Miwa, Miki; Yamaguchi, Chieko; Suzuki, Hiromi; Takemoto, Atsushi

    2018-01-01

    Learning and memory processes are similarly organized in humans and monkeys; therefore, monkeys can be ideal models for analyzing human aging processes and neurodegenerative diseases such as Alzheimer's disease. With the development of novel gene modification methods, common marmosets (Callithrix jacchus) have been suggested as an animal model for neurodegenerative diseases. Furthermore, the common marmoset's lifespan is relatively short, which makes it a practical animal model for aging. Working memory deficits are a prominent symptom of both dementia and aging, but no data are currently available for visual working memory in common marmosets. The delayed matching-to-sample task is a powerful tool for evaluating visual working memory in humans and monkeys; therefore, we developed a novel procedure for training common marmosets in such a task. Using visual discrimination and reversal tasks to direct the marmosets' attention to the physical properties of visual stimuli, we successfully trained 11 out of 13 marmosets in the initial stage of the delayed matching-to-sample task and provided the first available data on visual working memory in common marmosets. We found that the marmosets required many trials to initially learn the task (median: 1316 trials), but once the task was learned, the animals needed fewer trials to learn the task with novel stimuli (476 trials or fewer, with the exception of one marmoset). The marmosets could retain visual information for up to 16 s. Our novel training procedure could enable us to use the common marmoset as a useful non-human primate model for studying visual working memory deficits in neurodegenerative diseases and aging.

  14. Visual Motor and Perceptual Task Performance in Astigmatic Students

    Directory of Open Access Journals (Sweden)

    Erin M. Harvey

    2017-01-01

    Purpose. To determine if spectacle corrected and uncorrected astigmats show reduced performance on visual motor and perceptual tasks. Methods. Third through 8th grade students were assigned to the low refractive error control group (astigmatism < 1.00 D, myopia < 0.75 D, hyperopia < 2.50 D, and anisometropia < 1.50 D) or bilateral astigmatism group (right and left eye ≥ 1.00 D) based on cycloplegic refraction. Students completed the Beery-Buktenica Developmental Test of Visual Motor Integration (VMI) and Visual Perception (VMIp). Astigmats were randomly assigned to testing with/without correction and control group was tested uncorrected. Analyses compared VMI and VMIp scores for corrected and uncorrected astigmats to the control group. Results. The sample included 333 students (control group 170, astigmats tested with correction 75, and astigmats tested uncorrected 88). Mean VMI score in corrected astigmats did not differ from the control group (p=0.829). Uncorrected astigmats had lower VMI scores than the control group (p=0.038) and corrected astigmats (p=0.007). Mean VMIp scores for uncorrected (p=0.209) and corrected astigmats (p=0.124) did not differ from the control group. Uncorrected astigmats had lower mean scores than the corrected astigmats (p=0.003). Conclusions. Uncorrected astigmatism influences visual motor and perceptual task performance. Previously spectacle treated astigmats do not show developmental deficits on visual motor or perceptual tasks when tested with correction.

  15. Body sway at sea for two visual tasks and three stance widths.

    Science.gov (United States)

    Stoffregen, Thomas A; Villard, Sebastien; Yu, Yawen

    2009-12-01

    On land, body sway is influenced by stance width (the distance between the feet) and by visual tasks engaged in during stance. While wider stance can be used to stabilize the body against ship motion and crewmembers are obliged to carry out many visual tasks while standing, the influence of these factors on the kinematics of body sway has not been studied at sea. Crewmembers of the RN Atlantis stood on a force plate from which we obtained data on the positional variability of the center of pressure (COP). The sea state was 2 on the Beaufort scale. We varied stance width (5 cm, 17 cm, and 30 cm) and the nature of the visual tasks. In the Inspection task, participants viewed a plain piece of white paper, while in the Search task they counted the number of target letters that appeared in a block of text. Search task performance was similar to reports from terrestrial studies. Variability of the COP position was reduced during the Search task relative to the Inspection task. Variability was also reduced during wide stance relative to narrow stance. The influence of stance width was greater than has been observed in terrestrial studies. These results suggest that two factors that influence postural sway on land (variations in stance width and in the nature of visual tasks) also influence sway at sea. We conclude that--in mild sea states--the influence of these factors is not suppressed by ship motion.
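
    The sway measure reported above, positional variability of the COP, reduces to the standard deviation of the force-plate COP coordinates over a trial; the sketch below uses synthetic samples and an assumed sampling rate.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    cop = rng.normal(0, 5, size=(6000, 2))    # 60 s at an assumed 100 Hz; AP/ML in mm
    ap_sd, ml_sd = cop.std(axis=0)            # positional variability per axis
    print("AP SD: %.1f mm, ML SD: %.1f mm" % (ap_sd, ml_sd))
    ```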

  16. Concrete and abstract visualizations in history learning tasks

    NARCIS (Netherlands)

    Prangsma, Maaike; Van Boxtel, Carla; Kanselaar, Gellof; Kirschner, Paul A.

    2010-01-01

    Prangsma, M. E., Van Boxtel, C. A. M., Kanselaar, G., & Kirschner, P. A. (2009). Concrete and abstract visualizations in history learning tasks. British Journal of Educational Psychology, 79, 371-387.

  17. Frequency modulation of neural oscillations according to visual task demands.

    Science.gov (United States)

    Wutz, Andreas; Melcher, David; Samaha, Jason

    2018-02-06

    Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
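
    The quantity reported above to shift with task demands, the peak frequency of alpha oscillations, can be estimated from a channel's power spectrum as in this sketch; the signal is synthetic and the MEG pipeline in the study was considerably more involved.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 600
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(5)
    sig = np.sin(2 * np.pi * 9.5 * t) + rng.normal(0, 1.0, t.size)   # ~9.5 Hz "alpha"

    freqs, psd = welch(sig, fs=fs, nperseg=2 * fs)
    band = (freqs >= 8) & (freqs <= 12)
    peak = freqs[band][np.argmax(psd[band])]
    print("alpha peak frequency: %.2f Hz" % peak)
    ```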

  18. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    Science.gov (United States)

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  19. Asymmetrical learning between a tactile and visual serial RT task

    NARCIS (Netherlands)

    Abrahamse, E.L.; van der Lubbe, Robert Henricus Johannes; Verwey, Willem B.

    2007-01-01

    According to many researchers, implicit learning in the serial reaction-time task is predominantly motor based and therefore should be independent of stimulus modality. Previous research on the task, however, has focused almost completely on the visual domain. Here we investigated sequence learning

  20. Influence of visual feedback on human task performance in ITER remote handling

    Energy Technology Data Exchange (ETDEWEB)

    Schropp, Gwendolijn Y.R., E-mail: g.schropp@heemskerk-innovative.nl [Utrecht University, Utrecht (Netherlands); Heemskerk Innovative Technology, Noordwijk (Netherlands); Heemskerk, Cock J.M. [Heemskerk Innovative Technology, Noordwijk (Netherlands); Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann [Helmholtz Institute-Utrecht University, Utrecht (Netherlands); Elzendoorn, Ben S.Q. [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Clusterand ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands); Bult, David [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Clusterand ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands)

    2012-08-15

    Highlights: ► The performance of human operators in an ITER-like test facility for remote handling. ► Different sources of visual feedback influence how fast one can complete a maintenance task. ► Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  2. The influence of different doses of caffeine on visual task performance

    NARCIS (Netherlands)

    Lorist, MM; Snel, J; Ruijter, J

    1999-01-01

    In this study the influence of caffeine as an energy-increasing substance on visual information processing was examined. Subjects were presented with a dual-task consisting of two choice reaction time tasks. In addition, one of the tasks was presented at two levels of difficulty, influencing the

  3. Priming T2 in a Visual and Auditory Attentional Blink Task

    NARCIS (Netherlands)

    Burg, E. van der; Olivers, C.N.L.; Bronkhorst, A.W.; Theeuwes, J.

    2008-01-01

    Participants performed an attentional blink (AB) task including digits as targets and letters as distractors within the visual and auditory domains. Prior to the rapid serial visual presentation, a visual or auditory prime was presented in the form of a digit that was identical to the second target

  4. Predictive Validity And Usefulness Of Visual Scanning Task In HIV ...

    African Journals Online (AJOL)

    The visual scanning task is a useful screening tool for brain damage in HIV/AIDS by inference from impairment of visual information processing and disturbances in perceptual mental strategies. There is progressive neuro-cognitive decline as the disease worsens. Keywords: brain, cognition, HIV/AIDS, predictive validity, ...

  5. The effect of haptic guidance and visual feedback on learning a complex tennis task.

    Science.gov (United States)

    Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert

    2013-11-01

    While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (discrete timing task) or learning a velocity profile (time-critical tracking task). The aim of the present study is to evaluate which feedback conditions (visual or haptic guidance) optimize learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task (learning to start a tennis stroke) and (2) a tracking task (learning to follow a velocity profile). The effect that the task difficulty and subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and in especially difficult discrete tasks, while visual feedback seems to benefit more skilled subjects. Additionally, haptic guidance seemed to promote learning in a time-critical tracking task, while visual feedback tended to deteriorate the performance independently of the task difficulty and subjects' initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on

  6. Effect of visual feedback on brain activation during motor tasks: an FMRI study.

    Science.gov (United States)

    Noble, Jeremy W; Eng, Janice J; Boyd, Lara A

    2013-07-01

    This study examined the effect of visual feedback and force level on the neural mechanisms responsible for the performance of a motor task. We used a voxel-wise fMRI approach to determine the effect of visual feedback (with and without) during a grip force task at 35% and 70% of maximum voluntary contraction. Two areas (contralateral rostral premotor cortex and putamen) displayed an interaction between force and feedback conditions. When the main effect of feedback condition was analyzed, higher activation when visual feedback was available was found in 22 of the 24 active brain areas, while the two other regions (contralateral lingual gyrus and ipsilateral precuneus) showed greater levels of activity when no visual feedback was available. The results suggest that there is a potentially confounding influence of visual feedback on brain activation during a motor task, and for some regions, this is dependent on the level of force applied.

  7. Visual Saliency Prediction and Evaluation across Different Perceptual Tasks.

    Directory of Open Access Journals (Sweden)

    Shafin Rahman

    Saliency maps produced by different algorithms are often evaluated by comparing output to fixated image locations appearing in human eye tracking data. There are challenges in evaluation based on fixation data due to bias in the data. Properties of eye movement patterns that are independent of image content may limit the validity of evaluation results, including spatial bias in fixation data. To address this problem, we present modeling and evaluation results for data derived from different perceptual tasks related to the concept of saliency. We also present a novel approach to benchmarking to deal with some of the challenges posed by spatial bias. The results presented establish the value of alternatives to fixation data to drive improvement and development of models. We also demonstrate an approach to approximate the output of alternative perceptual tasks based on computational saliency and/or eye gaze data. As a whole, this work presents novel benchmarking results and methods, establishes a new performance baseline for perceptual tasks that provide an alternative window into visual saliency, and demonstrates the capacity for saliency to serve in approximating human behaviour for one visual task given data from another.
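
    To illustrate how spatial bias can be controlled when scoring a saliency map against fixation data, the minimal sketch below computes a shuffled-AUC style metric, in which fixations from other images serve as negatives. The map, fixation coordinates and parameter values are invented for the example, and the metric choice is an illustrative assumption rather than the benchmarking procedure used in this paper.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def shuffled_auc(saliency, fixations, other_fixations):
          # Score a saliency map against fixated pixels, using fixations
          # pooled from other images as negatives, so a map that only
          # reproduces the central spatial bias gains no credit.
          pos = saliency[fixations[:, 0], fixations[:, 1]]
          neg = saliency[other_fixations[:, 0], other_fixations[:, 1]]
          labels = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
          return roc_auc_score(labels, np.r_[pos, neg])

      # Toy example: a centre-biased map scored on off-centre fixations
      rng = np.random.default_rng(3)
      yy, xx = np.mgrid[0:100, 0:100]
      sal = np.exp(-((yy - 50) ** 2 + (xx - 50) ** 2) / (2 * 20.0 ** 2))
      fix = rng.integers(20, 40, size=(50, 2))     # fixations in the upper-left
      other = rng.integers(0, 100, size=(200, 2))  # stand-in "shuffled" negatives
      print(shuffled_auc(sal, fix, other))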

  8. Concurrent deployment of visual attention and response selection bottleneck in a dual-task: Electrophysiological and behavioural evidence.

    Science.gov (United States)

    Reimer, Christina B; Strobach, Tilo; Schubert, Torsten

    2017-12-01

    Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP), which reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was concurrently deployed to response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.

  9. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis.

    Science.gov (United States)

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN.

  11. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  12. Visual Information and Support Surface for Postural Control in Visual Search Task.

    Science.gov (United States)

    Huang, Chia-Chun; Yang, Chih-Mei

    2016-10-01

    When standing on a reduced support surface, people increase their reliance on visual information to control posture. This assertion was tested in the current study. The effects of imposed motion and support surface on postural control during visual search were investigated. Twelve participants (aged 21 ± 1.8 years; six men and six women) stood on a reduced support surface (45% base of support). In a room that moved back and forth along the anteroposterior axis, participants performed visual search for a given letter in an article. Postural sway variability and head-room coupling were measured. The results of head-room coupling, but not postural sway, supported the assertion that people increase reliance on visual information when standing on a reduced support surface. Whether standing on a whole or reduced surface, people stabilized their posture to perform the visual search tasks. Compared to a fixed target, searching on a hand-held target showed greater head-room coupling when standing on a reduced surface. © The Author(s) 2016.

  13. How task demands shape brain responses to visual food cues.

    Science.gov (United States)

    Pohl, Tanja Maria; Tempelmann, Claus; Noesselt, Toemme

    2017-06-01

    Several previous imaging studies have aimed at identifying the neural basis of visual food cue processing in humans. However, there is little consistency of the functional magnetic resonance imaging (fMRI) results across studies. Here, we tested the hypothesis that this variability across studies might - at least in part - be caused by the different tasks employed. In particular, we assessed directly the influence of task set on brain responses to food stimuli with fMRI using two tasks (colour vs. edibility judgement, between-subjects design). When participants judged colour, the left insula, the left inferior parietal lobule, occipital areas, the left orbitofrontal cortex and other frontal areas expressed enhanced fMRI responses to food relative to non-food pictures. However, when judging edibility, enhanced fMRI responses to food pictures were observed in the superior and middle frontal gyrus and in medial frontal areas including the pregenual anterior cingulate cortex and ventromedial prefrontal cortex. This pattern of results indicates that task sets can significantly alter the neural underpinnings of food cue processing. We propose that judging low-level visual stimulus characteristics - such as colour - triggers stimulus-related representations in the visual and even in gustatory cortex (insula), whereas discriminating abstract stimulus categories activates higher order representations in both the anterior cingulate and prefrontal cortex. Hum Brain Mapp 38:2897-2912, 2017. © 2017 Wiley Periodicals, Inc.

  14. Combined factors effect of menstrual cycle and background noise on visual inspection task performance: a simulation-based task.

    Science.gov (United States)

    Wijayanto, Titis; Tochihara, Yutaka; Wijaya, Andi R; Hermawati, Setia

    2009-11-01

    It is well known that women are physiologically and psychologically influenced by the menstrual cycle. In addition, the presence of background noise may affect task performance. So far, it has proven difficult to describe how the menstrual cycle and background noise affect task performance; some researchers have found improved performance during menstruation or in the presence of noise, others found performance deterioration, while still others have reported no dominant effect of either the menstrual cycle or the presence of noise. However, no study to date has investigated the combined effect of the menstrual cycle and the presence of background noise on task performance. Therefore, the purpose of this study was to examine the combined effect of menstrual cycle and background noise on visual inspection task performance, indexed by Signal Detection Theory (SDT) metrics: the sensitivity index (d') and the response criterion index (beta). For this purpose, ten healthy female students (21.5+/-1.08 years) with a regular menstrual cycle participated in this study. A VDT-based visual inspection task was used in a 3x2 factorial design. Two factors were analyzed: menstrual phase (pre-menstruation, PMS; menstruation, M; post-menstruation, PM) and background noise (80 dB(A) background noise vs. no noise). The results indicated that the sensitivity index (d') of SDT was affected by the menstrual cycle phase and by the presence of background noise. On the other hand, no significant effect of the menstrual cycle or the presence of background noise was observed on the subjects' response tendency in visual inspection, as indexed by beta. According to the response criterion of each individual subject, the presence of noise affected the tendency of some subjects in detecting the object and making decisions during the visual inspection task.
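
    For readers unfamiliar with the SDT indices reported here, the following minimal sketch shows one standard way of computing d' and beta from hit and false-alarm counts; the counts and the log-linear correction are illustrative choices, not values taken from this study.

      from scipy.stats import norm

      def sdt_indices(hits, misses, false_alarms, correct_rejections):
          # Sensitivity d' and response criterion beta from raw counts.
          # A log-linear correction (+0.5 per cell) avoids infinite z-scores
          # when hit or false-alarm rates equal 0 or 1.
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
          d_prime = z_hit - z_fa
          beta = norm.pdf(z_hit) / norm.pdf(z_fa)  # likelihood-ratio criterion
          return d_prime, beta

      # One participant's counts in one condition (made-up numbers)
      print(sdt_indices(hits=42, misses=8, false_alarms=6, correct_rejections=44))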

  15. Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.

    Science.gov (United States)

    Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta

    2015-05-01

    Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), vanishing targets (touched targets disappeared from the screen). Except for the condition with vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls in the remaining three subtests. Both conditions providing feedback by changing the target color showed the highest number of omissions. Erasure of targets diminished omissions almost completely. The highest rate of perseverations was observed in the no-feedback condition. The implementation of distracters led to a moderate number of perseverations. Visual feedback without distracters and vanishing targets abolished perseverations nearly completely. Visual feedback and the presence of distracters aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor of visual neglect. Improvement of cancellation behavior with vanishing targets could have therapeutic implications. © 2015 APA, all rights reserved.

  16. Effects of age and auditory and visual dual tasks on closed-road driving performance.

    Science.gov (United States)

    Chaparro, Alex; Wood, Joanne M; Carberry, Trent

    2005-08-01

    This study investigated how driving performance of young and old participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants comprising two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and increased their time to complete the course under the dual task (visual and auditory) conditions compared with the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated for the visual dual task condition. The results of the regression analysis revealed that cognitive aging (measured by the DSS and Trails test) rather than chronologic age was a better predictor of the declines seen in driving performance under dual task conditions. An overall z score was calculated, which took into account both driving and the secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task compared with the visual dual task, and the older participants performed significantly worse than the young subjects. These findings demonstrate

  17. Effects of visual and verbal interference tasks on olfactory memory: the role of task complexity.

    Science.gov (United States)

    Annett, J M; Leslie, J C

    1996-08-01

    Recent studies have demonstrated that visual and verbal suppression tasks interfere with olfactory memory in a manner which is partially consistent with a dual coding interpretation. However, it has been suggested that total task complexity rather than modality specificity of the suppression tasks might account for the observed pattern of results. This study addressed the issue of whether or not the level of difficulty and complexity of suppression tasks could explain the apparent modality effects noted in earlier experiments. A total of 608 participants were each allocated to one of 19 experimental conditions involving interference tasks which varied suppression type (visual or verbal), nature of complexity (single, double or mixed) and level of difficulty (easy, optimal or difficult) and presented with 13 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed an overall effect for suppression nature, suppression level and time of testing with no effect for suppression type. The results lend only limited support to Paivio's (1986) dual coding theory, but have a number of characteristics which suggest that an adequate account of olfactory memory may be broadly similar to current theories of face and object recognition. All of these phenomena might be dealt with by an appropriately modified version of dual coding theory.

  18. The influence of time on task on mind wandering and visual working memory.

    Science.gov (United States)

    Krimsky, Marissa; Forster, Daniel E; Llabre, Maria M; Jha, Amishi P

    2017-12-01

    Working memory relies on executive resources for successful task performance, with higher demands necessitating greater resource engagement. In addition to mnemonic demands, prior studies suggest that internal sources of distraction, such as mind wandering (i.e., having off-task thoughts) and greater time on task, may tax executive resources. Herein, the consequences of mnemonic demand, mind wandering, and time on task were investigated during a visual working memory task. Participants (N=143) completed a delayed-recognition visual working memory task, with mnemonic load for visual objects manipulated across trials (1 item=low load; 2 items=high load) and subjective mind wandering assessed intermittently throughout the experiment using a self-report Likert-type scale (1=on-task, 6=off-task). Task performance (correct/incorrect response) and self-reported mind wandering data were evaluated by hierarchical linear modeling to track trial-by-trial fluctuations. Performance declined with greater time on task, and the rate of decline was steeper for high vs low load trials. Self-reported mind wandering increased over time, and significantly varied as a function of both load and time on task. Participants reported greater mind wandering at the beginning of the experiment for low vs. high load trials; however, with greater time on task, more mind wandering was reported during high vs. low load trials. These results suggest that the availability of executive resources in support of working memory maintenance processes fluctuates in a demand-sensitive manner with time on task, and may be commandeered by mind wandering. Copyright © 2017 Elsevier B.V. All rights reserved.
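
    As a rough illustration of the kind of trial-by-trial hierarchical (mixed) modeling described here, the sketch below fits a random-intercept model of simulated mind-wandering ratings with load and trial number as predictors. The data, variable names and effect sizes are invented for the example and do not come from the study, which also modeled trial accuracy.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_subj, n_trials = 30, 80

      # Simulated trial-level data: mind-wandering ratings (1-6) by memory
      # load (0 = low, 1 = high) and trial number, for each participant.
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_trials),
          "trial": np.tile(np.arange(n_trials), n_subj),
          "load": rng.integers(0, 2, n_subj * n_trials),
      })
      subj_icpt = np.repeat(rng.normal(0, 0.5, n_subj), n_trials)
      df["mw"] = (2.0 + subj_icpt + 0.01 * df.trial
                  + 0.15 * df.load * (df.trial / n_trials)
                  + rng.normal(0, 0.8, len(df))).clip(1, 6)

      # Random-intercept mixed model: does mind wandering grow with time on
      # task, and does that growth differ by load (load x trial interaction)?
      model = smf.mixedlm("mw ~ load * trial", df, groups=df["subject"]).fit()
      print(model.summary())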

  19. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    Science.gov (United States)

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
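
    The Bayesian inference model is not specified in detail in this abstract; as a hedged illustration of the general idea, the sketch below computes the posterior probability of a common audio-visual source under a standard causal-inference formulation with Gaussian likelihoods and a zero-centred spatial prior. All parameter values are arbitrary placeholders, not the estimates reported in the paper, and the paper's actual model may differ.

      import numpy as np

      def p_common(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=20.0, p_c=0.5):
          # Posterior probability that auditory (x_a) and visual (x_v)
          # measurements arose from one common source, under Gaussian
          # likelihoods and a zero-centred Gaussian spatial prior.
          va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2
          # Likelihood of both measurements given a single common source
          vc = va * vv + va * vp + vv * vp
          like_c = np.exp(-0.5 * ((x_a - x_v) ** 2 * vp + x_a ** 2 * vv
                                  + x_v ** 2 * va) / vc) / (2 * np.pi * np.sqrt(vc))
          # Likelihood given two independent sources
          like_i = (np.exp(-0.5 * (x_a ** 2 / (va + vp) + x_v ** 2 / (vv + vp)))
                    / (2 * np.pi * np.sqrt((va + vp) * (vv + vp))))
          return p_c * like_c / (p_c * like_c + (1 - p_c) * like_i)

      print(p_common(x_a=5.0, x_v=0.0))   # small disparity: p(common) is high,
      print(p_common(x_a=25.0, x_v=0.0))  # large disparity: p(common) is low,
                                          # so visual capture becomes unlikely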

  20. Cross-cultural differences for three visual memory tasks in Brazilian children.

    Science.gov (United States)

    Santos, F H; Mello, C B; Bueno, O F A; Dellatolas, G

    2005-10-01

    Norms for three visual memory tasks, including Corsi's block tapping test and the BEM 144 complex figures and visual recognition, were developed for neuropsychological assessment in Brazilian children. The tasks were administered to 127 children ages 7 to 10 years from rural and urban areas of the States of São Paulo and Minas Gerais. Analysis indicated age-related but not sex-related differences. A cross-cultural effect was observed in relation to copying and recall of the complex figures. Differences in performance between rural and urban children were also noted.

  2. Attention improves encoding of task-relevant features in the human visual cortex

    Science.gov (United States)

    Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank

    2011-01-01

    When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942
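
    The fMRI decoding approach mentioned here (multivariate pattern analysis of orientation-selective responses) can be illustrated with a small sketch: a linear classifier is trained to decode grating orientation from voxel patterns, and its cross-validated accuracy serves as a proxy for the strength of orientation-selective activity. The data below are simulated and the classifier choice is an assumption for the example; the study's actual pipeline may differ.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Simulated trials x voxels response patterns from one visual area,
      # with binary orientation labels (e.g. 45 vs 135 degree gratings).
      n_trials, n_voxels = 120, 200
      orientation = rng.integers(0, 2, size=n_trials)
      patterns = rng.normal(size=(n_trials, n_voxels))
      patterns[:, :20] += 0.4 * orientation[:, None]  # weak orientation signal

      # Cross-validated decoding accuracy as a proxy for the strength of
      # orientation-selective activity; the study compares such measures
      # across attention conditions.
      clf = LogisticRegression(max_iter=1000)
      print(cross_val_score(clf, patterns, orientation, cv=5).mean())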

  3. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission. Its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its distinguishing feature, in contrast to the other information systems of scientific space projects, is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for the ground segment's software, and the use of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. Visualization of the data being processed mostly relies on 2D and 3D graphics, reflecting the capabilities of traditional visualization tools. Stereo visualization methods are also used actively for some tasks, but their usage is usually limited to areas such as virtual and augmented reality and remote sensing data processing. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware; however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have recently appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly for the stereo visualization of complex physical processes as well as mathematical abstractions and models. The article describes an attempt to use this approach, and discusses the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display datasets of magnetospheric satellite onboard measurements and in the development of software for manual stereo matching.

  4. Design and implementation of an interface supporting information navigation tasks using hyperbolic visualization technique

    International Nuclear Information System (INIS)

    Lee, J. K.; Choi, I. K.; Jun, S. H.; Park, K. O.; Seo, Y. S.; Seo, S. M.; Koo, I. S.; Jang, M. H.

    2001-01-01

    Visualization techniques can be used to support the operator's information navigation tasks in systems that contain an enormous volume of information, such as the operating information display system and the computerized operating procedure system in the advanced control room of nuclear power plants. By offering an easily understandable view of hierarchically structured information, these techniques can reduce the operator's supplementary navigation task load. As a result, operators can pay more attention to the primary tasks and ultimately improve their cognitive task performance. In this thesis, an interface was designed and implemented using a hyperbolic visualization technique, which is expected to be applied as a means of optimizing operators' information navigation tasks.
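
    To make the idea of hyperbolic visualization concrete, the sketch below lays out a small hierarchy in the Poincaré disk, where the Euclidean radius grows as tanh of the hyperbolic depth, so nodes near the focus get most of the display area while deeper levels compress toward the rim. The tree contents and the layout rule are simplified illustrations, not the interface described in the thesis.

      import math

      def poincare_radius(depth, step=1.0):
          # Euclidean radius in the unit (Poincare) disk of a node lying at
          # hyperbolic distance depth*step from the focus: r = tanh(d / 2).
          return math.tanh(depth * step / 2.0)

      def layout_tree(node, depth=0, lo=0.0, hi=2 * math.pi):
          # Assign (x, y) disk coordinates to a nested-dict tree {name: children}.
          # Children share their parent's angular wedge, so detail near the
          # focus stays readable while remote levels compress toward the rim.
          positions = {}
          for i, (name, children) in enumerate(node.items()):
              a = lo + i * (hi - lo) / len(node)
              b = lo + (i + 1) * (hi - lo) / len(node)
              theta, r = (a + b) / 2.0, poincare_radius(depth)
              positions[name] = (r * math.cos(theta), r * math.sin(theta))
              positions.update(layout_tree(children, depth + 1, a, b))
          return positions

      # A made-up operating-procedure hierarchy
      tree = {"plant": {"primary": {"pump A": {}, "pump B": {}},
                        "secondary": {"valve": {}}}}
      print(layout_tree(tree))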

  5. Effects of lighting and task parameters on visual acuity and performance

    Energy Technology Data Exchange (ETDEWEB)

    Halonen, L.

    1993-12-31

    Lighting and task parameters and their effects on visual acuity and visual performance are dealt with. The parameters studied are target contrast, target size and the subject's age, as well as adaptation luminance, the luminance ratio between the task and its surroundings, and temporal changes in luminance. Experiments were carried out to examine the effects of luminance and light spectrum on visual acuity. Young normally sighted, older and low vision people participated in the measurements. In the young and older subject groups, visual acuity remained unchanged at contrasts 0.93 and 0.63 over the luminance range of 15-630 cd/m². The results show that at contrasts 0.03-0.93 young and older subjects' visual acuity remained unchanged in the luminance range of 105-630 cd/m². In the low vision group, changes in luminance between 25-860 cd/m² did not have significant effects on visual acuity measured at high contrast (0.93); at low contrast, slight individual changes were found. The colour temperature of the light sources was varied between 2900-9500 K in the experiment. In the older, young and low vision subject groups the light spectrum did not have significant effects on visual acuity, except for two retinitis pigmentosa subjects. On the basis of the visual acuity experiments, a three-dimensional visual acuity model (VA-HUT) has been developed. The model predicts visual acuity as a function of luminance, target contrast and observer age. On the basis of the visual acuity experiments, visual acuity reserve values have also been calculated for different text sizes.

  6. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    Science.gov (United States)

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.

  7. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    Directory of Open Access Journals (Sweden)

    Ineke eFengler

    2015-04-01

    Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 hours and tested them before and after visual deprivation (i.e., after 8 h on average and at 4-week follow-up) on an audio-visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio-visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the abilities tested later. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which seems to persist over longer durations.

  8. CREATING AUDIO VISUAL DIALOGUE TASK AS STUDENTS’ SELF ASSESSMENT TO ENHANCE THEIR SPEAKING ABILITY

    Directory of Open Access Journals (Sweden)

    Novia Trisanti

    2017-04-01

    This study gives an overview of employing an audio-visual dialogue task as a student creativity task and self-assessment in an EFL speaking class in tertiary education to enhance the students' speaking ability. The qualitative research was done in one of the speaking classes at the English Department, Semarang State University, Central Java, Indonesia. The results, as seen from the self-assessment rubric, show that the oral performance in the audio-visual recorded tasks done by the students as their self-assessment gave positive evidence. The audio-visual dialogue task can be very beneficial since it can motivate the students' learning and increase their learning experiences. Self-assessment can be a valuable additional means to improve their speaking ability since it is one of the motives that drive self-evaluation, along with self-verification and self-enhancement.

  9. Task-dependent engagements of the primary visual cortex during kinesthetic and visual motor imagery.

    Science.gov (United States)

    Mizuguchi, Nobuaki; Nakamura, Maiko; Kanosue, Kazuyuki

    2017-01-01

    Motor imagery can be divided into kinesthetic and visual aspects. In the present study, we investigated excitability in the corticospinal tract and primary visual cortex (V1) during kinesthetic and visual motor imagery. To accomplish this, we measured motor evoked potentials (MEPs) and probability of phosphene occurrence during the two types of motor imageries of finger tapping. The MEPs and phosphenes were induced by transcranial magnetic stimulation to the primary motor cortex and V1, respectively. The amplitudes of MEPs and probability of phosphene occurrence during motor imagery were normalized based on the values obtained at rest. Corticospinal excitability increased during both kinesthetic and visual motor imagery, while excitability in V1 was increased only during visual motor imagery. These results imply that modulation of cortical excitability during kinesthetic and visual motor imagery is task dependent. The present finding aids in the understanding of the neural mechanisms underlying motor imagery and provides useful information for the use of motor imagery in rehabilitation or motor imagery training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Patterned-string tasks: relation between fine motor skills and visual-spatial abilities in parrots.

    Directory of Open Access Journals (Sweden)

    Anastasia Krasheninnikova

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string-pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string-pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string-pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals.

  11. Lateralized visual behavior in bottlenose dolphins (Tursiops truncatus) performing audio-visual tasks: the right visual field advantage.

    Science.gov (United States)

    Delfour, F; Marten, K

    2006-01-10

    Analyzing cerebral asymmetries in various species helps in understanding brain organization. The left and right sides of the brain (lateralization) are involved in different cognitive and sensory functions. This study focuses on dolphin visual lateralization as expressed by spontaneous eye preference when performing a complex cognitive task; we examine lateralization when processing different visual stimuli displayed on an underwater touch-screen (two-dimensional figures, three-dimensional figures and dolphin/human video sequences). Three female bottlenose dolphins (Tursiops truncatus) were submitted to a 2-, 3- or 4-choice visual/auditory discrimination problem, without any food reward: the subjects had to correctly match visual and acoustic stimuli together. In order to visualize and to touch the underwater target, the dolphins had to come close to the touch-screen and to position themselves using monocular vision (left or right eye) and/or binocular naso-ventral vision. The results showed an ability to associate simple visual forms and auditory information using an underwater touch-screen. Moreover, the subjects showed a spontaneous tendency to use monocular vision. Contrary to previous findings, our results did not clearly demonstrate right eye preference in spontaneous choice. However, the individuals' scores of correct answers were correlated with right eye vision, demonstrating the advantage of this visual field in visual information processing and suggesting a left hemispheric dominance. We also demonstrated that the nature of the presented visual stimulus does not seem to have any influence on the animals' monocular vision choice.

  12. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    Science.gov (United States)

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2
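
    The locus-of-slack logic used here compares the set-size effect (search slope) at short versus long SOA: an underadditive interaction means part of the search time is absorbed into the slack period while Task 1 response selection is still underway. The sketch below illustrates that analysis on simulated data; all numbers are invented for the example and are not the study's results.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(1)

      # Simulated Task 2 search times (ms) by SOA and search set size.
      trials = pd.DataFrame({
          "soa": np.repeat([50, 1000], 300),                  # short vs long SOA
          "set_size": np.tile(np.repeat([4, 8, 12], 100), 2),
      })
      # Underadditive pattern: a 25 ms/item slope at long SOA is partly
      # absorbed into cognitive slack (10 ms/item) at the short SOA.
      slope = np.where(trials.soa == 50, 10, 25)
      trials["rt2"] = (600 + (trials.soa == 50) * 350
                       + slope * trials.set_size
                       + rng.normal(0, 60, len(trials)))

      # Locus-of-slack logic: estimate the set-size effect at each SOA.
      for soa, grp in trials.groupby("soa"):
          b = np.polyfit(grp.set_size, grp.rt2, 1)[0]
          print(f"SOA {soa:4d} ms: set-size effect {b:.1f} ms/item")
      # A smaller slope at the short SOA (underadditive interaction) suggests
      # the search process ran in parallel with Task 1 response selection.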

  13. Relationship between reaction time, fine motor control, and visual-spatial perception on vigilance and visual-motor tasks in 22q11.2 Deletion Syndrome.

    LENUS (Irish Health Repository)

    Howley, Sarah A

    2012-10-15

    22q11.2 Deletion Syndrome (22q11DS) is a common microdeletion disorder associated with mild to moderate intellectual disability and specific neurocognitive deficits, particularly in visual-motor and attentional abilities. Currently there is evidence that the visual-motor profile of 22q11DS is not entirely mediated by intellectual disability and that these individuals have specific deficits in visual-motor integration. However, the extent to which attentional deficits, such as vigilance, influence impairments on visual motor tasks in 22q11DS is unclear. This study examines visual-motor abilities and reaction time using a range of standardised tests in 35 children with 22q11DS, 26 age-matched typically developing (TD) sibling controls and 17 low-IQ community controls. Statistically significant deficits were observed in the 22q11DS group compared to both low-IQ and TD control groups on a timed fine motor control and accuracy task. The 22q11DS group performed significantly better than the low-IQ control group on an untimed drawing task and were equivalent to the TD control group on point accuracy and simple reaction time tests. Results suggest that visual motor deficits in 22q11DS are primarily attributable to deficits in psychomotor speed which becomes apparent when tasks are timed versus untimed. Moreover, the integration of visual and motor information may be intact and, indeed, represent a relative strength in 22q11DS when there are no time constraints imposed. While this may have significant implications for cognitive remediation strategies for children with 22q11DS, the relationship between reaction time, visual reasoning, cognitive complexity, fine motor speed and accuracy, and graphomotor ability on visual-motor tasks is still unclear.

  14. Sonification and haptic feedback in addition to visual feedback enhances complex motor task learning.

    Science.gov (United States)

    Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter

    2015-03-01

    Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.
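
    As a minimal illustration of how a movement variable can be sonified concurrently with performance, the sketch below maps a normalized oar-velocity profile onto a pitch contour and synthesizes it as a sine sweep. The frequency range, sampling rate and bell-shaped profile are arbitrary assumptions for the example and not the mapping used in the rowing simulator.

      import numpy as np

      def sonify_velocity(velocity, sr=44100, f_lo=220.0, f_hi=880.0):
          # Map a normalized velocity profile (0..1, one value per millisecond
          # of movement) onto a pitch contour and synthesize a sine sweep,
          # so deviations from the target velocity profile become audible.
          t = np.arange(len(velocity)) / 1000.0               # seconds
          freq = f_lo + (f_hi - f_lo) * np.asarray(velocity)  # Hz
          t_audio = np.arange(0.0, t[-1], 1.0 / sr)
          f_audio = np.interp(t_audio, t, freq)
          phase = 2 * np.pi * np.cumsum(f_audio) / sr         # integrate phase
          return np.sin(phase)                                # samples in [-1, 1]

      # A made-up bell-shaped stroke velocity profile, about 800 ms long
      profile = np.exp(-((np.arange(800) - 400) ** 2) / (2 * 120.0 ** 2))
      print(sonify_velocity(profile).shape)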

  15. Task relevance differentially shapes ventral visual stream sensitivity to visible and invisible faces

    DEFF Research Database (Denmark)

    Kouider, Sid; Barbot, Antoine; Madsen, Kristoffer Hougaard

    2016-01-01

    Top-down modulations of the visual cortex can be driven by task relevance. Yet, several accounts propose that the perceptual inferences underlying conscious recognition involve similar top-down modulations of sensory responses. Studying the pure impact of task relevance on sensory responses requires dissociating it from the top-down influences underlying conscious recognition. Here, using visual masking to abolish perceptual consciousness in humans, we report that functional magnetic resonance imaging (fMRI) responses to invisible faces in the fusiform gyrus are enhanced when they are task-relevant. [...] Task relevance crucially shapes the sensitivity of fusiform regions to face stimuli, leading from enhancement to suppression of neural activity when the top-down influences accruing from conscious recognition are prevented.

  16. A dual-task investigation of automaticity in visual word processing

    Science.gov (United States)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  17. Cueing and Anxiety in a Visual Concept Learning Task.

    Science.gov (United States)

    Turner, Philip M.

    This study investigated the relationship of two anxiety measures (the State-Trait Anxiety Inventory-Trait Form and the S-R Inventory of Anxiousness-Exam Form) to performance on a visual concept-learning task with embedded criterial information. The effect on anxiety reduction of cueing criterial information was also examined, and two levels of…

  18. Coherent visualization of spatial data adapted to roles, tasks, and hardware

    Science.gov (United States)

    Wagner, Boris; Peinsipp-Byma, Elisabeth

    2012-06-01

    Modern crisis management requires users with different roles and computer environments to deal with a high volume of data from various sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) which supports the user depending on the available data and the task he or she has to solve. The system merges and visualizes spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. The visualization rules are defined with generic Styled Layer Descriptors (SLDs), an Open Geospatial Consortium (OGC) standard; SLDs specify which data are shown, when, and how. The defined SLDs take the users' roles and task requirements into account. Different displays can also be used, and the visualization adapts to the individual resolution of each display, so that excessively high or low information density is avoided. In addition, the system enables users with different roles to work together simultaneously on the same database. Every user is provided with appropriate and coherent spatial data depending on his or her current task. These refined spatial data are served via the OGC services Web Map Service (WMS: server-side rendered raster maps) and Web Map Tile Service (WMTS: pre-rendered and cached raster maps).
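
    As an illustration of how a client might retrieve such a role- and task-specific map, the Python sketch below issues a standard OGC WMS 1.3.0 GetMap request that points at a remote SLD document. The endpoint URL, layer name, SLD file, and bounding box are hypothetical placeholders, not interfaces of the Fraunhofer IOSB system.

```python
import requests

# Hypothetical endpoint, layer, and SLD names; the parameters follow the
# OGC WMS 1.3.0 GetMap interface.
WMS_URL = "https://example.org/geoserver/wms"

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "crisis:operational_picture",
    "styles": "",                                            # styling comes from the SLD
    "sld": "https://example.org/sld/analyst_overview.sld",   # role/task-specific SLD
    "crs": "EPSG:4326",
    "bbox": "47.0,7.0,48.0,9.0",
    "width": 1920, "height": 1080,                           # adapt to the user's display
    "format": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
with open("map.png", "wb") as f:
    f.write(response.content)
```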

  19. Choosing Your Poison: Optimizing Simulator Visual System Selection as a Function of Operational Tasks

    Science.gov (United States)

    Sweet, Barbara T.; Kaiser, Mary K.

    2013-01-01

    Although current-technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.

  20. Individual personality differences in goats predict their performance in visual learning and non-associative cognitive tasks.

    Science.gov (United States)

    Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G

    2017-01-01

    Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better than more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Altered visual-spatial attention to task-irrelevant information is associated with falls risk in older adults

    Science.gov (United States)

    Nagamatsu, Lindsay S.; Munkacsy, Michelle; Liu-Ambrose, Teresa; Handy, Todd C.

    2014-01-01

    Executive cognitive functions play a critical role in falls risk – a pressing health care issue in seniors. In particular, intact attentional processing is integral for safe mobility and navigation. However, the specific contribution of impaired visual-spatial attention in falls remains unclear. In this study, we examined the association between visual-spatial attention to task-irrelevant stimuli and falls risk in community-dwelling older adults. Participants completed a visual target discrimination task at fixation while task-irrelevant probes were presented in both visual fields. We assessed attention to left and right peripheral probes using event-related potentials (ERPs). Falls risk was determined using the valid and reliable Physiological Profile Assessment (PPA). We found a significantly positive association between reduced attentional facilitation, as measured by the N1 ERP component, and falls risk. This relationship was specific to probes presented in the left visual field and measured at ipsilateral electrode sites. Our results suggest that fallers exhibit reduced attention to the left side of visual space and provide evidence that impaired right hemispheric function and/or structure may contribute to falls. PMID:24436970

  2. The Emergence of Visual Awareness: Temporal Dynamics in Relation to Task and Mask Type

    Science.gov (United States)

    Kiefer, Markus; Kammer, Thomas

    2017-01-01

    One aspect of consciousness phenomena, the temporal emergence of visual awareness, has been the subject of a controversial debate. How can visual awareness, that is, the experiential quality of visual stimuli, best be characterized? Is there a sharp discontinuous or dichotomous transition between unaware and fully aware states, or does awareness emerge gradually, encompassing intermediate states? Previous studies yielded conflicting results and supported both dichotomous and gradual views. It is conceivable that these conflicting results are more than noise and instead reflect the dynamic nature of the temporal emergence of visual awareness. Using a psychophysical approach, the present research tested whether the emergence of visual awareness is context-dependent, using a temporal two-alternative forced-choice task. During backward masking of word targets, it was assessed whether the relative temporal sequence of stimulus thresholds is modulated by the task (stimulus presence, letter case, lexical decision, and semantic category) and by mask type. Four masks with different similarity to the target features were created. Psychophysical functions were then fitted to the accuracy data in the different task conditions as a function of the stimulus-mask SOA in order to determine the inflection point (conscious threshold of each feature) and slope of the psychophysical function (transition from unaware to aware within each feature). Depending on feature-mask similarity, thresholds in the different tasks were either highly dispersed, suggesting a graded transition from unawareness to awareness, or less differentiated, indicating that the clusters of features probed by the tasks contribute to the percept almost simultaneously. The latter observation, although not compatible with the notion of a sharp all-or-none transition between unaware and aware states, suggests a less gradual or more discontinuous emergence of awareness. Analyses of slopes of the fitted psychophysical functions
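
    A minimal sketch of the kind of curve fitting described above, assuming a simple logistic psychometric function for a two-alternative forced-choice task; the SOA values, accuracies, and starting parameters below are illustrative, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, threshold, slope, lapse=0.0, guess=0.5):
    """2AFC psychometric function: accuracy as a function of stimulus-mask SOA."""
    return guess + (1.0 - guess - lapse) / (1.0 + np.exp(-slope * (soa - threshold)))

# Illustrative accuracies for one task condition at several SOAs (ms).
soa = np.array([10, 20, 30, 40, 60, 80, 100], dtype=float)
accuracy = np.array([0.52, 0.55, 0.63, 0.74, 0.88, 0.95, 0.97])

# Fit the inflection point (awareness threshold) and slope (abruptness of the
# unaware-to-aware transition); lapse and guess rate keep their defaults.
popt, _ = curve_fit(logistic, soa, accuracy, p0=[40.0, 0.1])
threshold, slope = popt
print(f"threshold = {threshold:.1f} ms, slope = {slope:.3f} per ms")
```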

  3. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    Science.gov (United States)

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. The activation of visual memory for facial identity is task-dependent: evidence from human electrophysiology.

    Science.gov (United States)

    Zimmermann, Friederike G S; Eimer, Martin

    2014-05-01

    The question whether the recognition of individual faces is mandatory or task-dependent is still controversial. We employed the N250r component of the event-related potential as a marker of the activation of representations of facial identity in visual memory, in order to find out whether identity-related information from faces is encoded and maintained even when facial identity is task-irrelevant. Pairs of faces appeared in rapid succession, and the N250r was measured in response to repetitions of the same individual face, as compared to presentations of two different faces. In Experiment 1, an N250r was present in an identity matching task where identity information was relevant, but not when participants had to detect infrequent targets (inverted faces), and facial identity was task-irrelevant. This was the case not only for unfamiliar faces, but also for famous faces, suggesting that even famous face recognition is not as automatic as is often assumed. In Experiment 2, an N250r was triggered by repetitions of non-famous faces in a task where participants had to match the view of each face pair, and facial identity had to be ignored. This shows that when facial features have to be maintained in visual memory for a subsequent comparison, identity-related information is retained as well, even when it is irrelevant. Our results suggest that individual face recognition is neither fully mandatory nor completely task-dependent. Facial identity is encoded and maintained in tasks that involve visual memory for individual faces, regardless of the to-be-remembered feature. In tasks without this memory component, irrelevant visual identity information can be completely ignored. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    Science.gov (United States)

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
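
    A hedged sketch of how the reported equal error rate (EER) can be computed from verification scores by sweeping a decision threshold; the score distributions below are synthetic placeholders, not the eye-movement features or datasets used in the paper.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep a decision threshold and return the EER, i.e. the point where the
    false-acceptance and false-rejection rates cross (higher score = same person)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = 1.0, 0.5
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Illustrative similarity scores (not data from the study).
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)
impostor = rng.normal(0.4, 0.1, 5000)
print(f"EER ≈ {equal_error_rate(genuine, impostor):.3f}")
```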

  6. Problem Behavior and Developmental Tasks in Adolescents with Visual Impairment and Sighted Peers

    Science.gov (United States)

    Pfeiffer, Jens P.; Pinquart, Martin

    2013-01-01

    This longitudinal study analyzed associations of problem behavior with the attainment of developmental tasks in 133 adolescents with visual impairment and 449 sighted peers. Higher levels of initial problem behavior predicted less progress in the attainment of developmental tasks at the one-year follow-up only in sighted adolescents. This…

  7. Analyzing Web pages visual scanpaths: between and within tasks variability.

    Science.gov (United States)

    Drusch, Gautier; Bastien, J M Christian

    2012-01-01

    In this paper, we propose a new method for comparing scanpaths in a bottom-up approach, and a test of the scanpath theory. To do so, we conducted a laboratory experiment in which 113 participants were invited to accomplish a set of tasks on two different websites. For each site, they had to perform two tasks, each of which was repeated once. The data were analyzed using a procedure similar to the one used by Duchowski et al. [8]. The first step was to automatically identify, then label, AOIs with the mean-shift clustering procedure [19]. Then, scanpaths were compared two by two with a modified version of the string-edit method, which takes into account the order in which AOIs are visited [2]. Our results show that scanpath variability between tasks but within participants seems to be lower than the within-task variability for a given participant. In other words, participants seem to be more coherent when they perform different tasks than when they repeat the same task. In addition, participants viewed more of the same AOIs when they performed a different task on the same Web page than when they repeated the same task. These results are quite different from what the scanpath theory predicts.
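
    The two analysis steps named above, mean-shift labelling of AOIs and string-edit comparison of scanpaths, can be sketched as follows. This is a simplified stand-in rather than the authors' implementation: the bandwidth, the fixation coordinates, and the use of a plain Levenshtein distance (their string-edit method was modified) are all assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift

def scanpath_strings(fixation_sets, bandwidth=80):
    """Cluster all fixations recorded on the same page together (mean shift),
    so AOI labels are shared, then return one label string per scanpath."""
    all_fix = np.vstack(fixation_sets)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(all_fix)
    strings, start = [], 0
    for fix in fixation_sets:
        chunk = labels[start:start + len(fix)]
        strings.append("".join(chr(ord("A") + int(c)) for c in chunk))
        start += len(fix)
    return strings

def edit_distance(a, b):
    """Plain Levenshtein distance between two AOI strings (order-sensitive)."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(len(a) + 1), np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(d[len(a), len(b)])

# Two illustrative scanpaths (fixation x/y in pixels) from the same page.
s1, s2 = scanpath_strings([np.array([[100, 120], [105, 130], [400, 300], [410, 310]]),
                           np.array([[102, 118], [395, 305], [420, 315], [110, 125]])])
print(s1, s2, edit_distance(s1, s2))
```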

  8. Aging increases distraction by auditory oddballs in visual, but not auditory tasks.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-05-01

    Aging is typically considered to bring a reduction in the ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified and that the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modality or from distinct ones. Some studies suggest that age-related distraction is especially pronounced within modalities, while others suggest it is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (on most trials the same tone, the standard sound; on rare and unpredictable trials a burst of white noise, the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task but not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities and speculate that this may be due to the slowing of this type of attentional shift or to a reduction in the cognitive control required to re-orient attention toward the target's modality.

  9. A comparison of kinesthetic-tactual and visual displays via a critical tracking task. [for aircraft control

    Science.gov (United States)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.

    1979-01-01

    The feasibility of using the critical tracking task to evaluate kinesthetic-tactual displays was examined. The test subjects were asked to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. The results indicate that the critical tracking task is both a feasible and a reliable methodology for assessing tactual tracking. Furthermore, the approximately equal effects of quickening for the tactual and visual displays demonstrate that the critical tracking methodology is as sensitive and valid a measure of tactual tracking as it is of visual tracking.
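
    The controlled element in a critical tracking task is commonly written as dx/dt = λx + u, with the instability parameter λ (the inverse of the time constant) growing until the operator loses control; the λ reached at that point is the score. The sketch below simulates such a loop under assumed parameter values and a toy proportional "operator" with added motor noise, so the numbers are illustrative rather than from the study.

```python
import numpy as np

def critical_tracking(controller, dt=0.01, lam0=0.5, lam_rate=0.05,
                      error_limit=1.0, t_max=300.0):
    """Simulate the critical tracking plant dx/dt = lambda*x + u: lambda grows
    (i.e. the time constant 1/lambda keeps shrinking) until the displayed error
    exceeds the limit; the lambda reached at that moment is returned."""
    x, lam, t = 0.01, lam0, 0.0
    while t < t_max:
        u = controller(x)                 # operator closes the loop via the display
        x += dt * (lam * x + u)           # first-order unstable dynamics
        lam += dt * lam_rate              # time constant continuously decreases
        t += dt
        if abs(x) > error_limit:
            return lam                    # critical instability score
    return lam

# A crude stand-in "operator": proportional correction plus motor noise.
rng = np.random.default_rng(0)
score = critical_tracking(lambda x: -3.0 * x + rng.normal(0.0, 0.01))
print(f"critical lambda ≈ {score:.2f} 1/s")
```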

  10. Eye vergence responses during a visual memory task.

    Science.gov (United States)

    Solé Puig, Maria; Romeo, August; Cañete Crespillo, Jose; Supèr, Hans

    2017-02-08

    In a previous report it was shown that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence, depending on their power to capture attention. Working memory is highly dependent on attention. Therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s, and thereafter were presented with a series of single images. One half were repeat images, that is, they belonged to the initial set, and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge during scanning of the image set and during the presentation of the single images. The convergence was stronger for remembered images than for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and the coding/retrieval processes of memory strengthens the idea of a role for vergence in the attentional processing of visual information.

  11. Where perception meets memory: a review of repetition priming in visual search tasks.

    Science.gov (United States)

    Kristjánsson, Arni; Campana, Gianluca

    2010-01-01

    What we have recently seen and attended to strongly influences how we subsequently allocate visual attention. A clear example is how repeated presentation of an object's features or location in visual search tasks facilitates subsequent detection or identification of that item, a phenomenon known as priming. Here, we review a large body of results from priming studies that suggest that a short-term implicit memory system guides our attention to recently viewed items. The nature of this memory system and the processing level at which visual priming occurs are still debated. Priming might be due to activity modulations of low-level areas coding simple stimulus characteristics or to higher level episodic memory representations of whole objects or visual scenes. Indeed, recent evidence indicates that only minor changes to the stimuli used in priming studies may alter the processing level at which priming occurs. We also review recent behavioral, neuropsychological, and neurophysiological evidence that indicates that the priming patterns are reflected in activity modulations at multiple sites along the visual pathways. We furthermore suggest that studies of priming in visual search may potentially shed important light on the nature of cortical visual representations. Our conclusion is that priming occurs at many different levels of the perceptual hierarchy, reflecting activity modulations ranging from lower to higher levels, depending on the stimulus, task, and context; in fact, on the neural loci that are involved in the analysis of the stimuli for which priming effects are seen.

  12. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    Science.gov (United States)

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30, 0, and 30 azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some
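
    A hedged sketch of the hybrid (BEAMAR) signal construction described above: keep the low frequencies of the natural binaural (KEMAR) signal and the high frequencies of the beamformer (BEAM) output, then sum them. The crossover frequency, filter order, and white-noise stand-in signals are assumptions for illustration, not the study's actual processing parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def beamar_mix(kemar, beam, fs, crossover_hz=800, order=4):
    """Combine lowpass-filtered KEMAR audio with highpass-filtered BEAM audio."""
    lp = butter(order, crossover_hz, btype="lowpass", fs=fs, output="sos")
    hp = butter(order, crossover_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(lp, kemar) + sosfiltfilt(hp, beam)

# Illustrative use with white-noise stand-ins for the two processed signals.
fs = 44100
rng = np.random.default_rng(1)
kemar = rng.standard_normal(fs)      # 1 s of "natural binaural" audio
beam = rng.standard_normal(fs)       # 1 s of beamformer output
mixed = beamar_mix(kemar, beam, fs)
```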

  13. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    Science.gov (United States)

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high-density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, to support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Hand movement deviations in a visual search task with cross modal cuing

    Directory of Open Access Journals (Sweden)

    Hürol Aslan

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants’ reaction times, we paid special attention to tracking the hand movements toward the target. According to the results, the auditory stimuli unassociated with the target locations slightly, but significantly, increased the deviation of the hand movement from the path leading to the target location. The increase in the deviation depended on the degree of association between auditory stimuli and target locations, albeit not on the level of detail in the instructions about the task.

  15. The Impairing Effect of Mental Fatigue on Visual Sustained Attention under Monotonous Multi-Object Visual Attention Task in Long Durations: An Event-Related Potential Based Study.

    Directory of Open Access Journals (Sweden)

    Zizheng Guo

    The impairing effects of mental fatigue on visual sustained attention were assessed by event-related potentials (ERPs). Subjects performed a dual visual task, which included a continuous tracking task (primary task) and a random signal detection task (secondary task), for 63 minutes nonstop in order to elicit ERPs. During this period, subjective levels of mental fatigue, behavioral performance measures, and electroencephalograms were recorded for each subject. Comparing data from the first interval (0-25 min) to those of the second, the following phenomena were observed: subjective fatigue ratings increased with time, indicating that performing the tasks leads to increased mental fatigue; reaction times lengthened and accuracy rates decreased in the second interval, indicating that subjects' sustained attention decreased; and in the ERP data, the P3 amplitudes elicited by the random signals decreased while the P3 latencies increased in the second interval. These results suggest that mental fatigue can modulate higher-level cognitive processes, in that fewer attentional resources are allocated to the random stimuli, which slows information evaluation and decision making about the stimuli. These findings provide new insights into how mental fatigue affects visual sustained attention and can therefore help in designing countermeasures to prevent accidents caused by low visual sustained attention.
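
    A minimal sketch of how P3 amplitude and latency could be read out from averaged epochs of the kind analysed above; the time window, sampling rate, and synthetic data are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

def p3_amplitude_latency(epochs, times, window=(0.25, 0.60)):
    """Average single-trial epochs (trials x samples) from one channel and take
    the largest positive deflection in an assumed 250-600 ms P3 window."""
    erp = epochs.mean(axis=0)                        # average over trials
    mask = (times >= window[0]) & (times <= window[1])
    peak = np.argmax(erp[mask])
    return erp[mask][peak], times[mask][peak]        # amplitude, latency (s)

# Illustrative synthetic epochs: 100 trials, 1-s window sampled at 500 Hz,
# with a Gaussian "P3-like" bump around 400 ms buried in noise.
times = np.arange(0.0, 1.0, 1 / 500)
rng = np.random.default_rng(3)
epochs = rng.normal(0, 2, (100, times.size)) + 5 * np.exp(-((times - 0.4) ** 2) / 0.005)
amp, lat = p3_amplitude_latency(epochs, times)
print(f"P3 amplitude ≈ {amp:.1f} µV at {lat * 1000:.0f} ms")
```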

  16. Illusory conjunctions and perceptual grouping in a visual search task in schizophrenia.

    Science.gov (United States)

    Carr, V J; Dewis, S A; Lewin, T J

    1998-07-27

    This report describes part of a series of experiments, conducted within the framework of feature integration theory, to determine whether patients with schizophrenia show deficits in preattentive processing. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing the frequency of illusory conjunctions (i.e. false perceptions) under conditions of divided attention (Experiment 3) and a task which examined the effects of perceptual grouping on illusory conjunctions (Experiment 4). We also assessed current symptomatology and its relationship to task performance. Contrary to our hypotheses, schizophrenia subjects did not show higher rates of illusory conjunctions, and the influence of perceptual grouping on the frequency of illusory conjunctions was similar for schizophrenia and control subjects. Nonetheless, specific predictions from feature integration theory about the impact of different target types (Experiment 3) and perceptual groups (Experiment 4) on the likelihood of forming an illusory conjunction were strongly supported, thereby confirming the integrity of the experimental procedures. Overall, these studies revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics.

  17. Sustained Attention in Auditory and Visual Monitoring Tasks: Evaluation of the Administration of a Rest Break or Exogenous Vibrotactile Signals.

    Science.gov (United States)

    Arrabito, G Robert; Ho, Geoffrey; Aghaei, Behzad; Burns, Catherine; Hou, Ming

    2015-12-01

    Performance and mental workload were observed for the administration of a rest break or exogenous vibrotactile signals in auditory and visual monitoring tasks. Sustained attention is mentally demanding. Techniques are required to improve observer performance in vigilance tasks. Participants (N = 150) monitored an auditory or a visual display for changes in signal duration in a 40-min watch. During the watch, participants were administered a rest break or exogenous vibrotactile signals. Detection accuracy was significantly greater in the auditory than in the visual modality. A short rest break restored detection accuracy in both sensory modalities following deterioration in performance. Participants experienced significantly lower mental workload when monitoring auditory than visual signals, and a rest break significantly reduced mental workload in both sensory modalities. Exogenous vibrotactile signals had no beneficial effects on performance, or mental workload. A rest break can restore performance in auditory and visual vigilance tasks. Although sensory differences in vigilance tasks have been studied, this study is the initial effort to investigate the effects of a rest break countermeasure in both auditory and visual vigilance tasks, and it is also the initial effort to explore the effects of the intervention of a rest break on the perceived mental workload of auditory and visual vigilance tasks. Further research is warranted to determine exact characteristics of effective exogenous vibrotactile signals in vigilance tasks. Potential applications of this research include procedures for decreasing the temporal decline in observer performance and the high mental workload imposed by vigilance tasks. © 2015, Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence.

  18. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    Science.gov (United States)

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, and specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials, so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession, so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials, though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace with a secondary WM task hampers the expression of learned configural associations.

  19. Task-specific impairments and enhancements induced by magnetic stimulation of human visual area V5.

    Science.gov (United States)

    Walsh, V; Ellison, A; Battelli, L; Cowey, A

    1998-03-22

    Transcranial magnetic stimulation (TMS) can be used to simulate the effects of highly circumscribed brain damage permanently present in some neuropsychological patients, by reversibly disrupting the normal functioning of the cortical area to which it is applied. By using TMS we attempted to recreate deficits similar to those reported in a motion-blind patient and to assess the specificity of deficits when TMS is applied over human area V5. We used six visual search tasks and showed that subjects were impaired in a motion but not a form 'pop-out' task when TMS was applied over V5. When motion was present, but irrelevant, or when attention to colour and form were required, TMS applied to V5 enhanced performance. When attention to motion was required in a motion-form conjunction search task, irrespective of whether the target was moving or stationary, TMS disrupted performance. These data suggest that attention to different visual attributes involves mutual inhibition between different extrastriate visual areas.

  20. The function of visual search and memory in sequential looking tasks

    NARCIS (Netherlands)

    J. Epelboim (Julie); R.M. Steinman (Robert); E. Kowler (Eileen); M. Edwards (Mark); Z. Pizlo (Zygmunt); D.W. Erkelens (Dirk Willem); H. Collewijn (Han)

    1995-01-01

    Eye and head movements were recorded as unrestrained subjects tapped or only looked at nearby targets. Scanning patterns were the same in both tasks: subjects looked at each target before tapping it; visual search had similar speeds and gaze-shift accuracies. Looking, however, took longer

  1. The effects of inspecting and constructing part-task-specific visualizations on team and individual learning

    NARCIS (Netherlands)

    Slof, Bert; Erkens, Gijsbert; Kirschner, Paul A.; Helms-Lorenz, Michelle

    This study examined whether inspecting and constructing different part-task-specific visualizations differentially affects learning. To this end, a complex business-economics problem was structured into three phase-related part-tasks: (1) determining core concepts, (2) proposing multiple solutions,

  2. Late Divergence of Target and Nontarget ERPs in a Visual Oddball Task

    Czech Academy of Sciences Publication Activity Database

    Damborská, A.; Brázdil, M.; Rektor, I.; Janoušová, E.; Chládek, Jan; Kukleta, M.

    2012-01-01

    Vol. 61, No. 3 (2012), pp. 307-318. ISSN 0862-8408. Institutional support: RVO:68081731. Keywords: Intracerebral recording * Oddball task * Visual evoked potentials * Mental counting * Memory. Subject RIV: CE - Biochemistry. Impact factor: 1.531, year: 2012

  3. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    Science.gov (United States)

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Neural circuits of eye movements during performance of the visual exploration task, which is similar to the responsive search score task, in schizophrenia patients and normal subjects

    International Nuclear Information System (INIS)

    Nemoto, Yasundo; Matsuda, Tetsuya; Matsuura, Masato

    2004-01-01

    Abnormal exploratory eye movements have been studied as a biological marker for schizophrenia. Using functional MRI (fMRI), we investigated brain activations of 12 healthy and 8 schizophrenic subjects during performance of a visual exploration task that is similar to the responsive search score task, in order to clarify the neural basis of the abnormal exploratory eye movements. Performance data, such as the number of eye movements, the reaction time, and the percentage of correct answers, showed no significant differences between the two groups. Only the normal subjects showed activations at the bilateral thalamus and the left anterior medial frontal cortex during the visual exploration tasks. In contrast, only the schizophrenic subjects showed activations at the right anterior cingulate gyrus during the same tasks. The activation at different locations in the two groups, the left anterior medial frontal cortex in normal subjects and the right anterior cingulate gyrus in schizophrenia subjects, was explained by the features of the visual tasks. Hypoactivation at the bilateral thalamus supports a dysfunctional filtering theory of schizophrenia. (author)

  5. Transfer of an induced preferred retinal locus of fixation to everyday life visual tasks.

    Science.gov (United States)

    Barraza-Bernal, Maria J; Rifai, Katharina; Wahl, Siegfried

    2017-12-01

    Subjects develop a preferred retinal locus of fixation (PRL) under simulation of central scotoma. If systematic relocations are applied to the stimulus position, PRLs manifest at a location in favor of the stimulus relocation. The present study investigates whether the induced PRL is transferred to important visual tasks in daily life, namely pursuit eye movements, signage reading, and text reading. Fifteen subjects with normal sight participated in the study. To develop a PRL, all subjects underwent a scotoma simulation in a prior study, where five subjects were trained to develop the PRL in the left hemifield, five in the right hemifield, and the remaining five subjects could naturally choose the PRL location. The position of this PRL was used as baseline. Under central scotoma simulation, subjects performed a pursuit task, a signage reading task, and a text-reading task. In addition, retention of the behavior was also studied. Results showed that the PRL position was transferred to the pursuit task and that the vertical location of the PRL was maintained on the text reading task. However, when reading signage, a function-driven change in PRL location was observed. In addition, retention of the PRL position was observed over weeks and months. These results indicate that PRL positions can be induced and may be further transferred to everyday life visual tasks, without hindering function-driven changes in PRL position.

  6. Assessment of brain damage in a geriatric population through use of a visual-searching task.

    Science.gov (United States)

    Turbiner, M; Derman, R M

    1980-04-01

    This study was designed to assess the discriminative capacity of a visual-searching task for brain damage, as described by Goldstein and Kyc (1978), for 10 hospitalized male, brain-damaged patients, 10 hospitalized male schizophrenic patients, and 10 normal subjects in a control group, all of whom were approximately 65 yr. old. The derived data indicated, at a statistically significant level, that the visual-searching task was effective in successfully classifying 80% of the brain-damaged sample when compared to the schizophrenic patients and discriminating 90% of the brain-damaged patients from normal subjects.

  7. Inferior frontal gyrus links visual and motor cortices during a visuomotor precision grip force task.

    Science.gov (United States)

    Papadelis, Christos; Arfeller, Carola; Erla, Silvia; Nollo, Giandomenico; Cattaneo, Luigi; Braun, Christoph

    2016-11-01

    Coordination between vision and action relies on a fronto-parietal network that receives visual and proprioceptive sensory input in order to compute motor control signals. Here, we investigated with magnetoencephalography (MEG) which cortical areas are functionally coupled on the basis of synchronization during visuomotor integration. MEG signals were recorded from twelve healthy adults while they performed a unimanual visuomotor (VM) task and control conditions. The VM task required the integration of pinch motor commands with visual sensory feedback. By using a beamformer, we localized the neural activity in the frequency range of 1-30 Hz during the VM task compared to rest. Virtual sensors were estimated at the active locations. A multivariate autoregressive model was used to estimate the power and coherence of estimated activity at the virtual sensors. Event-related desynchronisation (ERD) during the VM task was observed in early visual areas, the rostral part of the left inferior frontal gyrus (IFG), the right IFG, the superior parietal lobules, and the left hand motor cortex (M1). Functional coupling in the alpha frequency band bridged the regional activities observed in motor and visual cortices (the start and the end points in the visuomotor loop) through the left or right IFG. Coherence between the left IFG and left M1 correlated inversely with task performance. Our results indicate that an occipital-prefrontal-motor functional network facilitates the modulation of instructed motor responses to visual cues. This network may supplement the mechanism for guiding actions that is fully incorporated into the dorsal visual stream. Copyright © 2016 Elsevier B.V. All rights reserved.
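
    The study estimated power and coherence from a multivariate autoregressive model of the virtual-sensor time series. As a simpler, hedged stand-in for that step, the sketch below computes Welch coherence between two synthetic "virtual sensors" and averages it over an assumed alpha band; the sampling rate, lag, and noise levels are invented for illustration.

```python
import numpy as np
from scipy.signal import coherence

fs = 600.0                                     # assumed MEG sampling rate (Hz)
rng = np.random.default_rng(2)
shared = rng.standard_normal(int(10 * fs))     # common driver, 10 s of data

# Two toy "virtual sensors" that share a lagged common component plus noise.
left_ifg = shared + 0.5 * rng.standard_normal(len(shared))
left_m1 = np.roll(shared, 12) + 0.5 * rng.standard_normal(len(shared))

# Welch coherence spectrum, then the mean over the 8-12 Hz alpha band.
f, coh = coherence(left_ifg, left_m1, fs=fs, nperseg=1024)
alpha = (f >= 8) & (f <= 12)
print(f"mean alpha-band coherence ≈ {coh[alpha].mean():.2f}")
```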

  8. Psychophysical testing of visual prosthetic devices: a call to establish a multi-national joint task force

    Science.gov (United States)

    Rizzo, Joseph F., III; Ayton, Lauren N.

    2014-04-01

    Recent advances in the field of visual prostheses, as showcased in this special feature of Journal of Neural Engineering , have led to promising results from clinical trials of a number of devices. However, as noted by these groups there are many challenges involved in assessing vision of people with profound vision loss. As such, it is important that there is consistency in the methodology and reporting standards for clinical trials of visual prostheses and, indeed, the broader vision restoration research field. Two visual prosthesis research groups, the Boston Retinal Implant Project (BRIP) and Bionic Vision Australia (BVA), have agreed to work cooperatively to establish a multi-national Joint Task Force. The aim of this Task Force will be to develop a consensus statement to guide the methods used to conduct and report psychophysical and clinical results of humans who receive visual prosthetic devices. The overarching goal is to ensure maximum benefit to the implant recipients, not only in the outcomes of the visual prosthesis itself, but also in enabling them to obtain accurate information about this research with ease. The aspiration to develop a Joint Task Force was first promulgated at the inaugural 'The Eye and the Chip' meeting in September 2000. This meeting was established to promote the development of the visual prosthetic field by applying the principles of inclusiveness, openness, and collegiality among the growing body of researchers in this field. These same principles underlie the intent of this Joint Task Force to enhance the quality of psychophysical research within our community. Despite prior efforts, a critical mass of interested parties could not congeal. Renewed interest for developing joint guidelines has developed recently because of a growing awareness of the challenges of obtaining reliable measurements of visual function in patients who are severely visually impaired (in whom testing is inherently noisy), and of the importance of

  9. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    Science.gov (United States)

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of V1, which receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    Science.gov (United States)

    Sung, Kyongje; Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.
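
    The additive-factors reasoning in this abstract amounts to testing a tDCS-by-task-factor interaction on response times: a reliable interaction would place the tDCS effect in the stage that the task factor selectively influences, while additivity (main effects only) argues against that localization. A minimal sketch of such a test with statsmodels is shown below; the data frame is a placeholder with invented values, not data from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Placeholder RTs (ms): two tDCS conditions crossed with one task factor
# (discrimination difficulty), two observations per cell.
df = pd.DataFrame({
    "rt":         [512, 498, 505, 490, 630, 601, 622, 596],
    "tdcs":       ["sham", "anodal"] * 4,
    "difficulty": ["easy"] * 4 + ["hard"] * 4,
})

# Two-way ANOVA: the C(tdcs):C(difficulty) row is the interaction of interest.
model = smf.ols("rt ~ C(tdcs) * C(difficulty)", data=df).fit()
print(anova_lm(model, typ=2))
```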

  11. Spatiotemporal oscillatory dynamics of visual selective attention during a flanker task.

    Science.gov (United States)

    McDermott, Timothy J; Wiesman, Alex I; Proskovec, Amy L; Heinrichs-Graham, Elizabeth; Wilson, Tony W

    2017-08-01

    The flanker task is a test of visual selective attention that has been widely used to probe error monitoring, response conflict, and related constructs. However, to date, few studies have focused on the selective attention component of this task and imaged the underlying oscillatory dynamics serving task performance. In this study, 21 healthy adults successfully completed an arrow-based version of the Eriksen flanker task during magnetoencephalography (MEG). All MEG data were pre-processed and transformed into the time-frequency domain. Significant oscillatory brain responses were imaged using a beamforming approach, and voxel time series were extracted from the peak responses to identify the temporal dynamics. Across both congruent and incongruent flanker conditions, our results indicated robust decreases in alpha (9-12Hz) activity in medial and lateral occipital regions, bilateral parietal cortices, and cerebellar areas during task performance. In parallel, increases in theta (3-7Hz) oscillatory activity were detected in dorsal and ventral frontal regions, and the anterior cingulate. As per conditional effects, stronger alpha responses (i.e., greater desynchronization) were observed in parietal, occipital, and cerebellar cortices during incongruent relative to congruent trials, whereas the opposite pattern emerged for theta responses (i.e., synchronization) in the anterior cingulate, left dorsolateral prefrontal, and ventral prefrontal cortices. Interestingly, the peak latency of theta responses in these latter brain regions was significantly correlated with reaction time, and may partially explain the amplitude difference observed between congruent and incongruent trials. Lastly, whole-brain exploratory analyses implicated the frontal eye fields, right temporoparietal junction, and premotor cortices. These findings suggest that regions of both the dorsal and ventral attention networks contribute to visual selective attention processes during incongruent trials

  12. Altered visual strategies and attention are related to increased force fluctuations during a pinch grip task in older adults.

    Science.gov (United States)

    Keenan, Kevin G; Huddleston, Wendy E; Ernest, Bradley E

    2017-11-01

    The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided to the subjects. First, while viewing a high-gain compensatory feedback display (horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those who exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (force trace moving left to right across screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (rs = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related to motor performance. In addition, those older individuals with deficits in attention had impaired motor performance on two different hand motor control tasks, including

  13. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    Science.gov (United States)

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  14. Exploring Metacognitive Visual Literacy Tasks for Teaching Astronomy

    Science.gov (United States)

    Slater, Timothy F.; Slater, S.; Dwyer, W.

    2010-01-01

    Undoubtedly, astronomy is a scientific enterprise which often results in colorful and inspirational images of the cosmos that naturally capture our attention. Students encountering astronomy in the college classroom are often bombarded with images, movies, simulations, conceptual cartoons, graphs, and charts intended to convey the substance and technological advancement inherent in astronomy. For students who self-identify as visual learners, this aspect can make the science of astronomy come alive. For students who naturally attend to visual aesthetics, this aspect can make astronomy seem relevant. In other words, the visual nature that accompanies much of the scientific realm of astronomy has the ability to connect a wide range of students to science, not just those few who have great abilities and inclinations toward the mathematical analysis world. Indeed, this is fortunate for teachers of astronomy, who actively try to find ways to connect and build astronomical understanding with a broad range of student interests, motivations, and abilities. In the context of learning science, metacognition describes students’ self-monitoring, -regulation, and -awareness when thinking about learning. As such, metacognition is one of the foundational pillars supporting what we know about how people learn. Yet, the astronomy teaching and learning community knows very little about how to operationalize and support students’ metacognition in the classroom. In response, the Conceptual Astronomy, Physics and Earth sciences Research (CAPER) Team is developing and pilot-testing metacognitive tasks in the context of astronomy that focus on visual literacy of astronomical phenomena. In the initial versions, students are presented with a scientifically inaccurate narrative supposedly describing visual information, including images and graphical information, and asked to assess and correct the narrative, in the form of peer evaluation. To guide student thinking, students

  15. Divided visual attention: A comparison of patients with multiple sclerosis and controls, assessed with an optokinetic nystagmus suppression task.

    Science.gov (United States)

    Williams, Isla M; Schofield, Peter; Khade, Neha; Abel, Larry A

    2016-12-01

    Multiple sclerosis (MS) frequently causes impairment of cognitive function. We compared patients with MS with controls on divided visual attention tasks. The MS patients' and controls' stare optokinetic nystagmus (OKN) was recorded in response to a 24°/s full field stimulus. Suppression of the OKN response, judged by the gain, was measured during tasks dividing visual attention between the fixation target and a second stimulus, central or peripheral, static or dynamic. All participants completed the Audio Recorded Cognitive Screen. MS patients had lower gain on the baseline stare OKN. OKN suppression in divided attention tasks was the same in MS patients as in controls but in both groups was better maintained in static than in dynamic tasks. In only dynamic tasks, older age was associated with less effective OKN suppression. MS patients had lower scores on a timed attention task and on memory. There was no significant correlation between attention or memory and eye movement parameters. Attention, a complex multifaceted construct, has different neural combinations for each task. Despite impairments on some measures of attention, MS patients completed the divided visual attention tasks normally. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. The identification and modeling of visual cue usage in manual control task experiments

    Science.gov (United States)

    Sweet, Barbara Townsend

    Many fields of endeavor require humans to conduct manual control tasks while viewing a perspective scene. Manual control refers to tasks in which continuous, or nearly continuous, control adjustments are required. Examples include flying an aircraft, driving a car, and riding a bicycle. Perspective scenes can arise through natural viewing of the world, simulation of a scene (as in flight simulators), or through imaging devices (such as the cameras on an unmanned aerospace vehicle). Designers frequently have some degree of control over the content and characteristics of a perspective scene; airport designers can choose runway markings, vehicle designers can influence the size and shape of windows, as well as the location of the pilot, and simulator database designers can choose scene complexity and content. Little theoretical framework exists to help designers determine the answers to questions related to perspective scene content. An empirical approach is most commonly used to determine optimum perspective scene configurations. The goal of the research effort described in this dissertation has been to provide a tool for modeling the characteristics of human operators conducting manual control tasks with perspective-scene viewing. This is done for the purpose of providing an algorithmic, as opposed to empirical, method for analyzing the effects of changing perspective scene content for closed-loop manual control tasks. The dissertation contains the development of a model of manual control using a perspective scene, called the Visual Cue Control (VCC) Model. Two forms of model were developed: one model presumed that the operator obtained both position and velocity information from one visual cue, and the other model presumed that the operator used one visual cue for position, and another for velocity. The models were compared and validated in two experiments. The results show that the two-cue VCC model accurately characterizes the output of the human operator with a
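
    As a rough illustration of the one-cue versus two-cue distinction described in this abstract, the sketch below simulates a compensatory tracking loop in which a simulated operator derives position and rate information either from separate cues or by differencing a single noisy position cue. The control law, plant, gains, and noise levels are illustrative assumptions, not the VCC model itself.

      import numpy as np

      def simulate_tracking(two_cue=True, duration=60.0, dt=0.02, k_pos=2.0,
                            k_vel=0.5, pos_noise=0.02, vel_noise=0.05, seed=0):
          # Toy closed-loop run: an "operator" nulls a sum-of-sines disturbance
          # acting on an integrator plant using noisy percepts. In two-cue mode
          # the rate percept comes from its own cue; in one-cue mode it is
          # differenced from the noisy position percept. Values are guesses.
          rng = np.random.default_rng(seed)
          n = int(duration / dt)
          t = np.arange(n) * dt
          disturbance = 0.4 * np.sin(2 * np.pi * 0.2 * t) + 0.2 * np.sin(2 * np.pi * 0.55 * t)
          x = x_prev = p_prev = 0.0
          err = np.empty(n)
          for i in range(n):
              p_hat = x + rng.normal(0.0, pos_noise)              # perceived position error
              if two_cue:
                  v_hat = (x - x_prev) / dt + rng.normal(0.0, vel_noise)  # dedicated rate cue
              else:
                  v_hat = (p_hat - p_prev) / dt                   # rate differenced from the position cue
              u = -(k_pos * p_hat + k_vel * v_hat)                # PD-style operator output
              x_prev, p_prev = x, p_hat
              x += (u + disturbance[i]) * dt                      # integrator plant
              err[i] = x
          return float(np.sqrt(np.mean(err ** 2)))                # RMS tracking error

      print(simulate_tracking(two_cue=True), simulate_tracking(two_cue=False))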

  17. Beyond a mask and against the bottleneck: retroactive dual-task interference during working memory consolidation of a masked visual target.

    Science.gov (United States)

    Nieuwenstein, Mark; Wyble, Brad

    2014-06-01

    While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ more than an order of magnitude. Here, we contrasted these opposing views by examining if and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  18. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    OpenAIRE

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the s...

  19. Nintendo Wii Balance Board is sensitive to effects of visual tasks on standing sway in healthy elderly adults.

    Science.gov (United States)

    Koslucher, Frank; Wade, Michael G; Nelson, Brent; Lim, Kelvin; Chen, Fu-Chen; Stoffregen, Thomas A

    2012-07-01

    Research has shown that the Nintendo Wii Balance Board (WBB) can reliably detect the quantitative kinematics of the center of pressure in stance. Previous studies used relatively coarse manipulations (1- vs. 2-leg stance, and eyes open vs. closed). We sought to determine whether the WBB could reliably detect postural changes associated with subtle variations in visual tasks. Healthy elderly adults stood on a WBB while performing one of two visual tasks. In the Inspection task, they maintained their gaze within the boundaries of a featureless target. In the Search task, they counted the occurrence of designated target letters within a block of text. Consistent with previous studies using traditional force plates, the positional variability of the center of pressure was reduced during performance of the Search task, relative to movement during performance of the Inspection task. Using detrended fluctuation analysis, a measure of movement dynamics, we found that COP trajectories were more predictable during performance of the Search task than during performance of the Inspection task. The results indicate that the WBB is sensitive to subtle variations in both the magnitude and dynamics of body sway that are related to variations in visual tasks engaged in during stance. The WBB is an inexpensive, reliable technology that can be used to evaluate subtle characteristics of body sway in large or widely dispersed samples. Copyright © 2012 Elsevier B.V. All rights reserved.
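
    Detrended fluctuation analysis (DFA), the dynamics measure cited in this abstract, can be sketched in a few lines. The implementation below is a generic minimal version applied to a single center-of-pressure coordinate, not the authors' exact pipeline; the window sizes and scale range are illustrative choices.

      import numpy as np

      def dfa_alpha(signal, scales=None):
          # Detrended fluctuation analysis of a 1-D series such as one
          # center-of-pressure coordinate. Returns the scaling exponent alpha;
          # larger alpha indicates more persistent (predictable) sway dynamics.
          x = np.asarray(signal, dtype=float)
          y = np.cumsum(x - x.mean())                     # integrated, mean-centred profile
          if scales is None:
              scales = np.unique(np.logspace(np.log10(10), np.log10(len(x) // 4), 15).astype(int))
          flucts = []
          for n in scales:
              rms = []
              for w in range(len(y) // n):
                  seg = y[w * n:(w + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend per window
                  rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
              flucts.append(np.mean(rms))
          return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

      # White noise should give alpha near 0.5; more persistent signals give larger values.
      print(dfa_alpha(np.random.default_rng(1).normal(size=3000)))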

  20. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    Science.gov (United States)

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can

  1. Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.

    Science.gov (United States)

    Smyth, Rachael E; Oram Cardy, Janis; Purcell, David

    2017-06-01

    This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer grade digital camera. A visual inspection time task was recorded using short high-speed video clips and the timing as reported by the task's program was compared to the timing as recorded in the video clips. Discrepancies in these two timing reports were investigated further and based on display refresh rate, a decision was made whether the discrepancy was large enough to affect the results as reported by the task. In this particular study, the errors in timing were not large enough to impact the results of the study. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing in any software program on any operating system and display.
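
    A minimal sketch of the kind of timing check described here: software-reported stimulus durations are compared against durations obtained by counting high-speed video frames, and discrepancies larger than one display refresh interval are flagged. The camera frame rate and monitor refresh rate below are assumed values, not the study's hardware.

      def timing_discrepancies(reported_ms, frame_counts, camera_fps=240.0, refresh_hz=60.0):
          # Compare software-reported durations with durations measured by counting
          # high-speed video frames; flag differences larger than one display refresh.
          refresh_ms = 1000.0 / refresh_hz
          rows = []
          for reported, frames in zip(reported_ms, frame_counts):
              measured = frames * 1000.0 / camera_fps
              diff = measured - reported
              rows.append((reported, measured, diff, abs(diff) > refresh_ms))
          return rows

      # Two trials nominally shown for 50 ms and 100 ms, spanning 13 and 26 video frames.
      print(timing_discrepancies([50.0, 100.0], [13, 26]))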

  2. Task-set inertia and memory-consolidation bottleneck in dual tasks.

    Science.gov (United States)

    Koch, Iring; Rumiati, Raffaella I

    2006-11-01

    Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC --> BC vs. BC --> BC) in Experiment 3.

  3. The contributions of visual and central attention to visual working memory.

    Science.gov (United States)

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention-visual and central attention-for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention-visual or central-was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  4. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    OpenAIRE

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnove; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dua...

  5. Visual Attention During Brand Choice : The Impact of Time Pressure and Task Motivation

    NARCIS (Netherlands)

    Pieters, R.; Warlop, L.

    1998-01-01

    Measures derived from eye-movement data reveal that during brand choice consumers adapt to time pressure by accelerating the visual scanning sequence, by filtering information and by changing their scanning strategy. In addition, consumers with high task motivation filter brand information less and

  6. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    Science.gov (United States)

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  7. Reverse alignment "mirror image" visualization as a laparoscopic training tool improves task performance.

    Science.gov (United States)

    Dunnican, Ward J; Singh, T Paul; Ata, Ashar; Bendana, Emma E; Conlee, Thomas D; Dolce, Charles J; Ramakrishnan, Rakesh

    2010-06-01

    Reverse alignment (mirror image) visualization is a disconcerting situation occasionally faced during laparoscopic operations. This occurs when the camera faces back at the surgeon in the opposite direction from which the surgeon's body and instruments are facing. Most surgeons will attempt to optimize trocar and camera placement to avoid this situation. The authors' objective was to determine whether the intentional use of reverse alignment visualization during laparoscopic training would improve performance. A standard box trainer was configured for reverse alignment, and 34 medical students and junior surgical residents were randomized to train with either forward alignment (DIRECT) or reverse alignment (MIRROR) visualization. Enrollees were tested on both modalities before and after a 4-week structured training program specific to their modality. Student's t test was used to determine differences in task performance between the 2 groups. Twenty-one participants completed the study (10 DIRECT, 11 MIRROR). There were no significant differences in performance time between DIRECT or MIRROR participants during forward or reverse alignment initial testing. At final testing, DIRECT participants had improved times only in forward alignment performance; they demonstrated no significant improvement in reverse alignment performance. MIRROR participants had significant time improvement in both forward and reverse alignment performance at final testing. Reverse alignment imaging for laparoscopic training improves task performance for both reverse alignment and forward alignment tasks. This may be translated into improved performance in the operating room when faced with reverse alignment situations. Minimal lab training can account for drastic adaptation to this environment.

  8. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision.

    Science.gov (United States)

    Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G

    2017-11-01

    Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision, and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors, using this computer vision approach, is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed and duty cycle components of tasks that are part of the threshold limit value for hand activity for the purpose of identifying patterns of exposure associated with the specific job factors, as well as for suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video, and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors most contribute to HAL, and readily identify those work elements in the task that contribute more to increased risk for an injury. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements. Copyright © 2017. Published by Elsevier Ltd.
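
    The spatiotemporal quantities mentioned here (speed, exertion frequency, duty cycle) can be derived from tracked hand coordinates along the following lines. This is a generic sketch with an assumed rest-speed threshold, not the authors' computer-vision pipeline, and it omits the heat-map rendering step.

      import numpy as np

      def motion_metrics(xy, fps, rest_speed=20.0):
          # Per-frame kinematics for a tracked hand region (xy: N x 2 pixel
          # coordinates). The rest_speed threshold separating "exerting" from
          # "resting" frames is an illustrative assumption.
          xy = np.asarray(xy, dtype=float)
          speed = np.linalg.norm(np.gradient(xy, axis=0), axis=1) * fps  # pixels per second
          moving = speed > rest_speed
          duty_cycle = moving.mean()                      # fraction of time in motion
          onsets = np.sum(~moving[:-1] & moving[1:])      # rest-to-motion transitions
          frequency = onsets / (len(xy) / fps)            # movement onsets per second
          return speed, frequency, duty_cycle

      # Example: synthetic hand path alternating one second of motion with one second of rest, at 30 fps.
      step = np.tile(np.r_[np.ones(30), np.zeros(30)], 15)
      xy = np.cumsum(np.column_stack([step * 5.0, step * 2.0]), axis=0)
      _, frequency, duty_cycle = motion_metrics(xy, fps=30)
      print(round(frequency, 2), round(duty_cycle, 2))    # roughly 0.5 onsets/s and ~50% duty cycle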

  9. Proactive interference does not meaningfully distort visual working memory capacity estimates in the canonical change detection task.

    Science.gov (United States)

    Lin, Po-Han; Luck, Steven J

    2012-01-01

    The change detection task has become a standard method for estimating the storage capacity of visual working memory. Most researchers assume that this task isolates the properties of an active short-term storage system that can be dissociated from long-term memory systems. However, long-term memory storage may influence performance on this task. In particular, memory traces from previous trials may create proactive interference that sometimes leads to errors, thereby reducing estimated capacity. Consequently, the capacity of visual working memory may be higher than is usually thought, and correlations between capacity and other measures of cognition may reflect individual differences in proactive interference rather than individual differences in the capacity of the short-term storage system. Indeed, previous research has shown that change detection performance can be influenced by proactive interference under some conditions. The purpose of the present study was to determine whether the canonical version of the change detection task - in which the to-be-remembered information consists of simple, briefly presented features - is influenced by proactive interference. Two experiments were conducted using methods that ordinarily produce substantial evidence of proactive interference, but no proactive interference was observed. Thus, the canonical version of the change detection task can be used to assess visual working memory capacity with no meaningful influence of proactive interference.
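
    For readers unfamiliar with how capacity is estimated in this paradigm, a common estimator for single-probe change detection is Cowan's K; the abstract does not state which variant the authors used, so the formula below is offered only as background.

      def cowan_k(hits, misses, false_alarms, correct_rejections, set_size):
          # Cowan's K for single-probe change detection: K = N * (hit rate - false alarm rate).
          hit_rate = hits / (hits + misses)
          fa_rate = false_alarms / (false_alarms + correct_rejections)
          return set_size * (hit_rate - fa_rate)

      # e.g. 85% hits and 10% false alarms with six-item arrays gives K = 4.5
      print(cowan_k(85, 15, 10, 90, 6))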

  10. Does Proactive Interference Play a Significant Role in Visual Working Memory Tasks?

    Science.gov (United States)

    Makovski, Tal

    2016-01-01

    Visual working memory (VWM) is an online memory buffer that is typically assumed to be immune to source memory confusions. Accordingly, the few studies that have investigated the role of proactive interference (PI) in VWM tasks found only a modest PI effect at best. In contrast, a recent study has found a substantial PI effect in that performance…

  11. Cultural differences in attention: Eye movement evidence from a comparative visual search task.

    Science.gov (United States)

    Alotaibi, Albandri; Underwood, Geoffrey; Smith, Alastair D

    2017-10-01

    Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than British participants. Furthermore, intra-group comparisons of scan-path for Saudi participants revealed less similarity than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert Ramkhalawansingh

    2016-04-01

    Full Text Available Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e. optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e. engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  13. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback.

    Science.gov (United States)

    Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T

    2007-07-01

    Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic, three-dimensional stereoscopic visualization, and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.

  14. Functional Activation during the Rapid Visual Information Processing Task in a Middle Aged Cohort: An fMRI Study

    OpenAIRE

    Neale, Chris; Johnston, Patrick; Hughes, Matthew; Scholey, Andrew

    2015-01-01

    The Rapid Visual Information Processing (RVIP) task, a serial discrimination task where task performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has been undertaken using block analyses, reflecting the sustained processing involved in the task, but not necessarily the transient processes associated with individual trial performance. Furthermore...

  15. Redefining the L2 Listening Construct within an Integrated Writing Task: Considering the Impacts of Visual-Cue Interpretation and Note-Taking

    Science.gov (United States)

    Cubilo, Justin; Winke, Paula

    2013-01-01

    Researchers debate whether listening tasks should be supported by visuals. Most empirical research in this area has been conducted on the effects of visual support on listening comprehension tasks employing multiple-choice questions. The present study seeks to expand this research by investigating the effects of video listening passages (vs.…

  16. The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.

    Science.gov (United States)

    Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli

    2009-11-18

    We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.

  17. Visual scanning training for neglect after stroke with and without a computerized lane tracking dual task

    Directory of Open Access Journals (Sweden)

    M.E. Van Kessel

    2013-07-01

    Full Text Available Neglect patients typically fail to explore the contralesional half-space. During visual scanning training, these patients learn to consciously pay attention to contralesional target stimuli. It has been suggested that combining scanning training with methods addressing non-spatial attention might enhance training results. In the present study, a dual task training component was added to a visual scanning training (i.e. Training di Scanning Visuospaziale – TSVS; Pizzamiglio et al., 1990). Twenty-nine subacute right hemisphere stroke patients were semi-randomly assigned to an experimental (N=14) or a control group (N=15). Patients received 30 training sessions during six weeks. TSVS consisted of four standardized tasks (digit detection, reading/copying, copying drawings and figure description). Moreover, a driving simulator task was integrated in the training procedure. Control patients practiced a single lane tracking task for two days a week during six weeks. The experimental group was administered the same training schedule, but in weeks 4-6 of the training, the TSVS digit detection task was combined with lane tracking on the same projection screen, so as to create a dual task (CVRT-TR). Various neglect tests and driving simulator tasks were administered before and after training. No significant group and interaction effects were found that might reflect additional positive effects of dual task training. Significant improvements after training were observed in both groups taken together on most assessment tasks. Ameliorations were generally not correlated to post onset time, but spontaneous recovery, test-retest variability and learning effects could not be ruled out completely, since these were not controlled for. Future research might focus on increasing the amount of dual task training, the implementation of progressive difficulty levels in the driving simulator tasks and further exploration of relationships between dual task training and daily

  18. The Analysis of Task and Data Characteristic and the Collaborative Processing Method in Real-Time Visualization Pipeline of Urban 3DGIS

    Directory of Open Access Journals (Sweden)

    Dongbo Zhou

    2017-03-01

    Full Text Available Parallel processing in the real-time visualization of three-dimensional Geographic Information Systems (3DGIS) has tended to concentrate on algorithm levels in recent years, and most of the existing methods employ multiple threads in a Central Processing Unit (CPU) or kernel in a Graphics Processing Unit (GPU) to improve efficiency in the computation of the Level of Details (LODs) for three-dimensional (3D) Models and in the display of Digital Elevation Models (DEMs) and Digital Orthophoto Maps (DOMs). The systematic analysis of the task and data characteristics of parallelism in the real-time visualization of 3DGIS continues to fall behind the development of hardware. In this paper, the basic procedures of real-time visualization of urban 3DGIS are first reviewed, and then the real-time visualization pipeline is analyzed. Further, the pipeline is decomposed into different task stages based on the task order and the input-output dependency. Based on the analysis of task parallelism in different pipeline stages, the data parallelism characteristics in each task are summarized by studying the involved algorithms. Finally, this paper proposes a parallel co-processing mode and a collaborative strategy for real-time visualization of urban 3DGIS. It also provides a fundamental basis for developing parallel algorithms and strategies in 3DGIS.
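
    A toy example of the task-level parallelism discussed here: three pipeline stages (tile loading, LOD computation, rendering) run concurrently and pass work through bounded queues. The stage functions are placeholders supplied by the caller; this illustrates the general co-processing idea, not the authors' implementation.

      import queue, threading

      def run_pipeline(tiles, load, compute_lod, render):
          # Three-stage pipeline: the main thread loads tiles while worker threads
          # compute LODs and render, with bounded queues linking the stages.
          q_lod, q_draw = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
          END = object()

          def stage(fn, src, dst):
              while True:
                  item = src.get()
                  if item is END:
                      if dst is not None:
                          dst.put(END)
                      break
                  result = fn(item)
                  if dst is not None:
                      dst.put(result)

          workers = [threading.Thread(target=stage, args=(compute_lod, q_lod, q_draw)),
                     threading.Thread(target=stage, args=(render, q_draw, None))]
          for w in workers:
              w.start()
          for tile in tiles:
              q_lod.put(load(tile))        # loading overlaps with LOD computation and rendering
          q_lod.put(END)
          for w in workers:
              w.join()

      run_pipeline(range(4), load=lambda i: i, compute_lod=lambda d: d * 2,
                   render=lambda d: print("rendered tile data", d))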

  19. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    Science.gov (United States)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.

  20. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    Science.gov (United States)

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  1. Visual performance on detection tasks with double-targets of the same and different difficulty.

    Science.gov (United States)

    Chan, Alan H S; Courtney, Alan J; Ma, C W

    2002-10-20

    This paper reports a study of measurement of horizontal visual sensitivity limits for 16 subjects in single-target and double-targets detection tasks. Two phases of tests were conducted in the double-targets task; targets of the same difficulty were tested in phase one while targets of different difficulty were tested in phase two. The range of sensitivity for the double-targets test was found to be smaller than that for single-target in both the same and different target difficulty cases. The presence of another target was found to affect performance to a marked degree. Interference effect of the difficult target on detection of the easy one was greater than that of the easy one on the detection of the difficult one. Performance decrement was noted when correct percentage detection was plotted against eccentricity of target in both the single-target and double-targets tests. Nevertheless, the non-significant correlation found between the performance for the two tasks demonstrated that it was impossible to predict quantitatively ability for detection of double targets from the data for single targets. This indicated probable problems in generalizing data for single target visual lobes to those for multiple targets. Also lobe area values obtained from measurements using a single-target task cannot be applied in a mathematical model for situations with multiple occurrences of targets.

  2. Development of a standard methodology for optimizing remote visual display for nuclear maintenance tasks

    Science.gov (United States)

    Clarke, M. M.; Garin, J.; Prestonanderson, A.

    A fuel recycle facility being designed at Oak Ridge National Laboratory involves the Remotex concept: advanced servo-controlled master/slave manipulators, with remote television viewing, will totally replace direct human contact with the radioactive environment. The design of optimal viewing conditions is a critical component of the overall man/machine system. A methodology was developed for optimizing remote visual displays for nuclear maintenance tasks. The usefulness of this approach was demonstrated by preliminary specification of optimal closed circuit TV systems for such tasks.

  3. Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation

    Science.gov (United States)

    de Graaf, Tom A.; Gross, Joachim; Paterson, Gavin; Rusch, Tessa; Sack, Alexander T.; Thut, Gregor

    2013-01-01

    Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. We here investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8–12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences, relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, that 3) correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at frequencies of intrinsic brain-rhythms can be used to reveal influences of these rhythms on task performance to study their functional roles. PMID:23555873

  4. Proactive interference does not meaningfully distort visual working memory capacity estimates in the canonical change detection task

    Directory of Open Access Journals (Sweden)

    Po-Han Lin

    2012-02-01

    Full Text Available The change detection task has become a standard method for estimating the storage capacity of visual working memory. Most researchers assume that this task isolates the properties of an active short-term storage system that can be dissociated from long-term memory systems. However, long-term memory storage may influence performance on this task. In particular, memory traces from previous trials may create proactive interference that sometimes leads to errors, thereby reducing estimated capacity. Consequently, the capacity of visual working memory may be higher than is usually thought, and correlations between capacity and other measures of cognition may reflect individual differences in proactive interference rather than individual differences in the capacity of the short-term storage system. Indeed, previous research has shown that change detection performance can be influenced by proactive interference under some conditions. The purpose of the present study was to determine whether the canonical version of the change detection task—in which the to-be-remembered information consists of simple, briefly presented features—is influenced by proactive interference. Two experiments were conducted using methods that ordinarily produce substantial evidence of proactive interference, but no proactive interference was observed. Thus, the canonical version of the change detection task can be used to assess visual working memory capacity with no meaningful influence of proactive interference.

  5. Correlation between observation task performance and visual acuity, contrast sensitivity and environmental light in a simulated maritime study.

    Science.gov (United States)

    Koefoed, Vilhelm F; Assmuss, Jörg; Høvding, Gunnar

    2018-03-25

    To examine the relevance of visual acuity (VA) and index of contrast sensitivity (ICS) as predictors for visual observation task performance in a maritime environment. Sixty naval cadets were recruited to a study on observation tasks in a simulated maritime environment under three different light settings. Their ICS were computed based on contrast sensitivity (CS) data recorded by Optec 6500 and CSV-1000E CS tests. The correlation between object identification distance and VA/ICS was examined by stepwise linear regression. The object detection distance was significantly correlated to the level of environmental light (p maritime environment may presumably be ascribed to the normal and uniform visual capacity in all our study subjects. © 2018 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  6. Executive function deficits in team sport athletes with a history of concussion revealed by a visual-auditory dual task paradigm.

    Science.gov (United States)

    Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa

    2017-02-01

    The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes including, working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking, however, visual-auditory tasks have been examined rarely. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured, Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion had a significantly worse performance on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history concussion.

  7. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search.

    Science.gov (United States)

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.

  8. Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task

    Directory of Open Access Journals (Sweden)

    Jingyi S. Chia

    2017-06-01

    Full Text Available The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by significantly greater number of fixations on more areas of interest per trial than the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance), however, did not differ significantly between groups, which has implications on the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed especially within the skilled players. This augments the need for not just group-level analyses, but individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide an insight to the possible visual search strategies as players serve in net-barrier games. Moreover, this study highlighted an important aspect of

  9. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task

    DEFF Research Database (Denmark)

    Fitzpatrick, C. M.; Caballero-Puntiverio, M.; Gether, U.

    2017-01-01

    Rationale The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen’s Theory of Visual Attention (TVA) to estimate visual processing speeds...... on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual...... modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. Results The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis...
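
    For context, the single-stimulus TVA accrual function is often written as P(correct) = 1 - exp(-v(t - t0)) for effective exposures t above the visual threshold t0, with v the visual processing speed. The sketch below adds a guessing/response floor standing in for the motor response baseline; the exact three-parameter form fitted to the 5-CSRTT data is not given in the abstract, so this is only an approximation of the model class.

      import numpy as np

      def p_correct(exposure_s, v, t0, floor=0.0):
          # TVA-style exponential accrual: v = visual processing speed (items/s),
          # t0 = visual threshold (s), floor = assumed guessing/response baseline.
          exposure_s = np.asarray(exposure_s, dtype=float)
          p = 1.0 - np.exp(-v * np.clip(exposure_s - t0, 0.0, None))
          return floor + (1.0 - floor) * p

      # Accuracy predicted for effective exposures of 20, 50, 200 and 1000 ms.
      print(p_correct([0.02, 0.05, 0.2, 1.0], v=20.0, t0=0.015))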

  10. Task-Difficulty Homeostasis in Car Following Models: Experimental Validation Using Self-Paced Visual Occlusion.

    Directory of Open Access Journals (Sweden)

    Jami Pekkanen

    Full Text Available Car following (CF) models used in traffic engineering are often criticized for not incorporating "human factors" well known to affect driving. Some recent work has addressed this by augmenting the CF models with the Task-Capability Interface (TCI) model, by dynamically changing driving parameters as function of driver capability. We examined assumptions of these models experimentally using a self-paced visual occlusion paradigm in a simulated car following task. The results show strong, approximately one-to-one, correspondence between occlusion duration and increase in time headway. The correspondence was found between subjects and within subjects, on aggregate and individual sample level. The long time scale aggregate results support TCI-CF models that assume a linear increase in time headway in response to increased distraction. The short time scale individual sample level results suggest that drivers also adapt their visual sampling in response to transient changes in time headway, a mechanism which isn't incorporated in the current models.
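
    The near one-to-one relation reported here corresponds to a simple linear headway-adaptation rule, sketched below with an illustrative slope parameter; treating the adjustment as linear is exactly the TCI-style assumption the experiment tests.

      def adapted_time_headway(base_headway_s, occlusion_s, slope=1.0):
          # Linear TCI-style adaptation: target time headway grows with the time
          # spent visually occluded; a slope near 1 matches the reported one-to-one
          # correspondence, but the value here is illustrative.
          return base_headway_s + slope * occlusion_s

      # A 1.5 s baseline headway plus a 0.8 s self-paced occlusion gives 2.3 s.
      print(adapted_time_headway(1.5, 0.8))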

  11. Secondary visual workload capability with primary visual and kinesthetic-tactual displays

    Science.gov (United States)

    Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.

    1978-01-01

    Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.

  12. Visual Attention Allocation Between Robotic Arm and Environmental Process Control: Validating the STOM Task Switching Model

    Science.gov (United States)

    Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica

    2015-01-01

    Fifty-six participants time shared a spacecraft environmental control system task with a realistic space robotic arm control task in either a manual or highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks, in each of the eight conditions formed by manual-automated arm X expected-unexpected failure X monitoring-failure management. We also used our multi-attribute task switching model, based on task attributes of priority, interest, difficulty and salience that were self-rated by participants, to predict allocation. An un-weighted model based on attributes of difficulty, interest and salience accounted for 96 percent of the task allocation variance across the 8 different conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.
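
    The un-weighted attribute model can be illustrated with a few lines of arithmetic: each task's predicted share of attention is the equal-weight sum of its rated attributes, normalised across tasks. The attribute names and ratings below are hypothetical values for illustration, not data from the study.

      def predicted_allocation(task_attributes):
          # Equal-weight attribute model: each task's predicted share of attention
          # is the sum of its rated attributes, normalised across tasks.
          totals = {task: sum(ratings.values()) for task, ratings in task_attributes.items()}
          grand_total = sum(totals.values())
          return {task: total / grand_total for task, total in totals.items()}

      # Hypothetical ratings on a 1-5 scale; the arm task is predicted to draw ~63% of dwell time.
      ratings = {"robotic_arm": {"interest": 4, "difficulty": 5, "salience": 3},
                 "environmental_control": {"interest": 3, "difficulty": 2, "salience": 2}}
      print(predicted_allocation(ratings))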

  13. Poor Performance on Serial Visual Tasks in Persons with Reading Disabilities: Impaired Working Memory?

    Science.gov (United States)

    Ram-Tsur, Ronit; Faust, Miriam; Zivotofsky, Ari Z.

    2008-01-01

    The present study investigates the performance of persons with reading disabilities (PRD) on a variety of sequential visual-comparison tasks that have different working-memory requirements. In addition, mediating relationships between the sequential comparison process and attention and memory skills were looked for. Our findings suggest that PRD…

  14. Effects of display set size and its variability on the event-related potentials during a visual search task

    OpenAIRE

    Miyatani, Makoto; Sakata, Sumiko

    1999-01-01

    This study investigated the effects of display set size and its variability on the event-related potentials (ERPs) during a visual search task. In Experiment 1, subjects were required to respond if a visual display, which consisted of two, four, or six alphabets, contained one of two members of memory set. In Experiment 2, subjects detected the change of the shape of a fixation stimulus, which was surrounded by the same alphabets as in Experiment 1. In the search task (Experiment 1), the incr...

  15. Exploring the role of task performance and learning style on prefrontal hemodynamics during a working memory task.

    Science.gov (United States)

    Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H

    2018-01-01

    Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural processes of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC region. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and the level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation in the left DLPFC during the auditory task, while those with higher visual scores exhibited higher activation in bilateral DLPFC during the visual task. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.

  16. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    Science.gov (United States)

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  18. From foreground to background: how task-neutral context influences contextual cueing of visual search

    Directory of Open Access Journals (Sweden)

    Xuelian eZang

    2016-06-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang & Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.

  19. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    Science.gov (United States)

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search. PMID:27375530

  20. The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task.

    Science.gov (United States)

    Deprez, Sabine; Vandenbulcke, Mathieu; Peeters, Ron; Emsell, Louise; Amant, Frederic; Sunaert, Stefan

    2013-09-01

    Insight into the neural architecture of multitasking is crucial when investigating the pathophysiology of multitasking deficits in clinical populations. Presently, little is known about how the brain combines dual-tasking with a concurrent short-term memory task, despite the relevance of this mental operation in daily life and the frequency of complaints related to this process, in disease. In this study we aimed to examine how the brain responds when a memory task is added to dual-tasking. Thirty-three right-handed healthy volunteers (20 females, mean age 39.9 ± 5.8) were examined with functional brain imaging (fMRI). The paradigm consisted of two cross-modal single tasks (a visual and auditory temporal same-different task with short delay), a dual-task combining both single tasks simultaneously and a multi-task condition, combining the dual-task with an additional short-term memory task (temporal same-different visual task with long delay). Dual-tasking compared to both individual visual and auditory single tasks activated a predominantly right-sided fronto-parietal network and the cerebellum. When adding the additional short-term memory task, a larger and more bilateral frontoparietal network was recruited. We found enhanced activity during multitasking in components of the network that were already involved in dual-tasking, suggesting increased working memory demands, as well as recruitment of multitask-specific components including areas that are likely to be involved in online holding of visual stimuli in short-term memory such as occipito-temporal cortex. These results confirm concurrent neural processing of a visual short-term memory task during dual-tasking and provide evidence for an effective fMRI multitasking paradigm. © 2013 Elsevier Ltd. All rights reserved.

  1. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    Science.gov (United States)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  2. Functional interaction between right parietal and bilateral frontal cortices during visual search tasks revealed using functional magnetic imaging and transcranial direct current stimulation.

    Directory of Open Access Journals (Sweden)

    Amanda Ellison

    The existence of a network of brain regions which are activated when one undertakes a difficult visual search task is well established. Two primary nodes on this network are right posterior parietal cortex (rPPC) and right frontal eye fields. Both have been shown to be involved in the orientation of attention, but the contingency that the activity of one of these areas has on the other is less clear. We sought to investigate this question by using transcranial direct current stimulation (tDCS) to selectively decrease activity in rPPC and then asking participants to perform a visual search task whilst undergoing functional magnetic resonance imaging. Comparison with a condition in which sham tDCS was applied revealed that cathodal tDCS over rPPC causes a selective bilateral decrease in frontal activity when performing a visual search task. This result demonstrates for the first time that premotor regions within the frontal lobe and rPPC are not only necessary to carry out a visual search task, but that they work together to bring about normal function.

  3. Slow wave maturation on a visual working memory task.

    Science.gov (United States)

    Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, Ma Ángeles; Gómez, Carlos M

    2014-07-01

    The purpose of the present study is to analyze how the Slow Wave develops in the retention period on a visual Delayed Match-to-Sample task performed by 170 subjects between 6 and 26 years old, divided into 5 age groups. In addition, a neuropsychological test (Working Memory Test Battery for Children) was correlated with this Event-Related Potential (ERP) in order to observe possible relationships between Slow Wave maturation and the components of Baddeley and Hitch's Working Memory model. The results showed a slow negativity during the retention period in the posterior region in all the age groups, possibly resulting from sustained neural activity related to the visual item presented. In the anterior region, a positive slow wave was observed in the youngest subjects. Dipole analysis suggests that this fronto-central positivity in children (6-13 years old) consists of the positive side of the posterior negativity, since these subjects only needed two posterior dipoles to explain almost all the neural activity. Negative correlations were shown between the Slow Wave and the Working Memory Test Battery for Children, indicating a commonality between the Slow Wave and the neuropsychological testing in assessing Working Memory. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Examining the Use of a Visual Analytics System for Sensemaking Tasks: Case Studies with Domain Experts.

    Science.gov (United States)

    Kang, Youn-Ah; Stasko, J

    2012-12-01

    While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.

  5. Neural Correlates of Changes in a Visual Search Task due to Cognitive Training in Seniors

    Directory of Open Access Journals (Sweden)

    Nele Wild-Wall

    2012-01-01

    This study aimed to elucidate the underlying neural sources of near transfer after a multidomain cognitive training in older participants in a visual search task. Participants were randomly assigned to a social control, a no-contact control, and a training group, which received a 4-month paper-pencil and PC-based, trainer-guided cognitive intervention. All participants were tested in before and after sessions with a conjunction visual search task. Performance and event-related potentials (ERPs) suggest that the cognitive training improved feature processing of the stimuli, which was expressed in an increased rate of target detection compared to the control groups. This was paralleled by enhanced amplitudes of the frontal P2 in the ERP and by higher activation in lingual and parahippocampal brain areas which are discussed to support visual feature processing. Enhanced N1 and N2 potentials in the ERP for nontarget stimuli after cognitive training additionally suggest improved attention and subsequent processing of arrays which were not immediately recognized as targets. Possible test repetition effects were confined to processes of stimulus categorisation, as suggested by the P3b potential. The results show neurocognitive plasticity in aging after a broad cognitive training and allow pinpointing the functional loci of effects induced by cognitive training.

  6. Functional relationships between the hippocampus and dorsomedial striatum in learning a visual scene-based memory task in rats.

    Science.gov (United States)

    Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah

    2014-11-19

    The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than when using novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes compared with novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors 0270-6474/14/3415534-14$15.00/0.

  7. Task-irrelevant distractors in the delay period interfere selectively with visual short-term memory for spatial locations.

    Science.gov (United States)

    Marini, Francesco; Scott, Jerry; Aron, Adam R; Ester, Edward F

    2017-07-01

    Visual short-term memory (VSTM) enables the representation of information in a readily accessible state. VSTM is typically conceptualized as a form of "active" storage that is resistant to interference or disruption, yet several recent studies have shown that under some circumstances task-irrelevant distractors may indeed disrupt performance. Here, we investigated how task-irrelevant visual distractors affected VSTM by asking whether distractors induce a general loss of remembered information or selectively interfere with memory representations. In a VSTM task, participants recalled the spatial location of a target visual stimulus after a delay in which distractors were presented on 75% of trials. Notably, the distractor's eccentricity always matched the eccentricity of the target, while in the critical conditions the distractor's angular position was shifted either clockwise or counterclockwise relative to the target. We then computed estimates of recall error for both eccentricity and polar angle. A general interference model would predict an effect of distractors on both polar angle and eccentricity errors, while a selective interference model would predict effects of distractors on angle but not on eccentricity errors. Results showed that for stimulus angle there was an increase in the magnitude and variability of recall errors. However, distractors had no effect on estimates of stimulus eccentricity. Our results suggest that distractors selectively interfere with VSTM for spatial locations.
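
    The abstract does not give the error formulas; as a minimal illustration, a spatial recall response can be split into an eccentricity (radial) error and a polar-angle error relative to the target, which is the decomposition the selective-interference comparison relies on.

```python
import numpy as np

def recall_errors(target_xy, response_xy):
    """Decompose a spatial recall error into eccentricity and polar-angle components."""
    tx, ty = target_xy
    rx, ry = response_xy
    ecc_err = np.hypot(rx, ry) - np.hypot(tx, ty)                                # radial (eccentricity) error
    ang_err = np.angle(np.exp(1j * (np.arctan2(ry, rx) - np.arctan2(ty, tx))))   # wrapped angular error
    return ecc_err, np.degrees(ang_err)

# Target at ~10 deg eccentricity, 45 deg polar angle; response rotated ~10 deg clockwise.
ecc, ang = recall_errors((7.07, 7.07), (8.19, 5.74))
print(f"eccentricity error = {ecc:.2f}, angular error = {ang:.1f} deg")
```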

  8. Development of a standard methodology for optimizing remote visual display for nuclear-maintenance tasks

    International Nuclear Information System (INIS)

    Clarke, M.M.; Garin, J.; Preston-Anderson, A.

    1981-01-01

    The aim of the present study is to develop a methodology for optimizing remote viewing systems for a fuel recycle facility (HEF) being designed at Oak Ridge National Laboratory (ORNL). An important feature of this design involves the Remotex concept: advanced servo-controlled master/slave manipulators, with remote television viewing, will totally replace direct human contact with the radioactive environment. Therefore, the design of optimal viewing conditions is a critical component of the overall man/machine system. A methodology has been developed for optimizing remote visual displays for nuclear maintenance tasks. The usefulness of this approach has been demonstrated by preliminary specification of optimal closed circuit TV systems for such tasks

  9. Attentional Capture by Salient Distractors during Visual Search Is Determined by Temporal Task Demands

    DEFF Research Database (Denmark)

    Kiss, Monika; Grubert, Anna; Petersen, Anders

    2012-01-01

    The question whether attentional capture by salient but task-irrelevant visual stimuli is triggered in a bottom–up fashion or depends on top–down task settings is still unresolved. Strong support for bottom–up capture was obtained in the additional singleton task, in which search arrays were visible...... until response onset. Equally strong evidence for top–down control of attentional capture was obtained in spatial cueing experiments in which display durations were very brief. To demonstrate the critical role of temporal task demands on salience-driven attentional capture, we measured ERP indicators...... component that was followed by a late Pd component, suggesting that they triggered attentional capture, which was later replaced by location-specific inhibition. When search arrays were visible for only 200 msec, the distractor-elicited N2pc was eliminated and was replaced by a Pd component in the same time...

  10. Posing for awareness: proprioception modulates access to visual consciousness in a continuous flash suppression task.

    Science.gov (United States)

    Salomon, Roy; Lim, Melanie; Herbelin, Bruno; Hesselmann, Guido; Blanke, Olaf

    2013-06-03

    The rules governing the selection of which sensory information reaches consciousness are yet unknown. Of our senses, vision is often considered to be the dominant sense, and the effects of bodily senses, such as proprioception, on visual consciousness are frequently overlooked. Here, we demonstrate that the position of the body influences visual consciousness. We induced perceptual suppression by using continuous flash suppression. Participants had to judge the orientation of a target stimulus embedded in a task-irrelevant picture of a hand. The picture of the hand could either be congruent or incongruent with the participants' actual hand position. When the viewed and the real hand positions were congruent, perceptual suppression was broken more rapidly than during incongruent trials. Our findings provide the first evidence of a proprioceptive bias in visual consciousness, suggesting that proprioception not only influences the perception of one's own body and self-consciousness, but also visual consciousness.

  11. The nature of impulsivity: visual exposure to natural environments decreases impulsive decision-making in a delay discounting task.

    Directory of Open Access Journals (Sweden)

    Meredith S Berry

    The benefits of visual exposure to natural environments for human well-being in areas of stress reduction, mood improvement, and attention restoration are well documented, but the effects of natural environments on impulsive decision-making remain unknown. Impulsive decision-making in delay discounting offers generality, predictive validity, and insight into decision-making related to unhealthy behaviors. The present experiment evaluated differences in such decision-making in humans experiencing visual exposure to one of the following conditions: natural (e.g., mountains), built (e.g., buildings), or control (e.g., triangles), using a delay discounting task that required participants to choose between immediate and delayed hypothetical monetary outcomes. Participants viewed the images before and during the delay discounting task. Participants were less impulsive in the condition providing visual exposure to natural scenes compared to built and geometric scenes. Results suggest that exposure to natural environments results in decreased impulsive decision-making relative to built environments.
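
    The abstract does not state which discounting function was fitted; a common choice for such tasks is Mazur's hyperbolic model, sketched below with made-up amounts and delays to show how a smaller discounting parameter k (less impulsive) shifts choices toward the delayed reward.

```python
def discounted_value(amount, delay_days, k):
    """Mazur's hyperbolic discounting: subjective value of a delayed reward."""
    return amount / (1.0 + k * delay_days)

def choose(immediate, delayed, delay_days, k):
    """Return which option a discounter with parameter k prefers."""
    return "delayed" if discounted_value(delayed, delay_days, k) > immediate else "immediate"

# $50 now vs $100 in 60 days: a steep discounter (k=0.05) takes the immediate reward,
# a shallow discounter (k=0.005) waits for the delayed one.
for k in (0.05, 0.005):
    print(f"k={k}: chooses {choose(50, 100, 60, k)}")
```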

  12. Visual Scanning Patterns during the Dimensional Change Card Sorting Task in Children with Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Li Yi

    2012-01-01

    Impaired cognitive flexibility in children with autism spectrum disorder (ASD) has been reported in previous literature. The present study explored ASD children’s visual scanning patterns during the Dimensional Change Card Sorting (DCCS) task using an eye-tracking technique. ASD and typically developing (TD) children completed the standardized DCCS procedure on the computer while their eye movements were tracked. Behavioral results confirmed previous findings on ASD children’s deficits in executive function. ASD children’s visual scanning patterns also showed some specific underlying processes in the DCCS task compared to TD children. For example, ASD children looked at the correct card for a shorter time in the postswitch phase and spent more time looking at blank areas than TD children did. ASD children did not show a bias to the color dimension as TD children did. The correlations between the behavioral performance and eye movements were also discussed.

  13. Executive Function Is Necessary for Perspective Selection, Not Level-1 Visual Perspective Calculation: Evidence from a Dual-Task Study of Adults

    Science.gov (United States)

    Qureshi, Adam W.; Apperly, Ian A.; Samson, Dana

    2010-01-01

    Previous research suggests that perspective-taking and other "theory of mind" processes may be cognitively demanding for adult participants, and may be disrupted by concurrent performance of a secondary task. In the current study, a Level-1 visual perspective task was administered to 32 adults using a dual-task paradigm in which the secondary task…

  14. Beyond a Mask and Against the Bottleneck: Retroactive Dual-Task Interference During Working Memory Consolidation of a Masked Visual Target

    NARCIS (Netherlands)

    Nieuwenstein, Mark; Wyble, Brad

    While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result,

  15. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as
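
    A minimal sketch of the two quantities the study compares, assuming a novelty preference score defined as the proportion of scored gaze samples on the novel image, and simulated per-trial scores for the two recording methods; all numbers are illustrative, not the study's data.

```python
import numpy as np

def novelty_preference(gaze_labels):
    """Proportion of scored samples spent on the novel image ('N') vs the familiar one ('F')."""
    looks = [g for g in gaze_labels if g in ("N", "F")]
    return sum(g == "N" for g in looks) / len(looks)

rng = np.random.default_rng(0)
# Simulated per-trial novelty preference scores from a 60 FPS eye tracker and a 3 FPS web camera.
tracker_scores = rng.uniform(0.4, 0.9, size=30)
webcam_scores = tracker_scores + rng.normal(0, 0.05, size=30)   # coarser sampling adds noise

print(f"example trial score: {novelty_preference(list('NNFNFN')):.2f}")
print(f"Pearson r between scoring methods: {np.corrcoef(tracker_scores, webcam_scores)[0, 1]:.2f}")
```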

  16. Attentional bias modification based on visual probe task: methodological issues, results and clinical relevance

    Directory of Open Access Journals (Sweden)

    Fernanda Machado Lopes

    2015-12-01

    Introduction: Attentional bias, the tendency that a person has to drive or maintain attention to a specific class of stimuli, may play an important role in the etiology and persistence of mental disorders. Attentional bias modification has been studied as a form of additional treatment related to automatic processing. Objectives: This systematic literature review compared and discussed methods, evidence of success and potential clinical applications of studies about attentional bias modification (ABM) using a visual probe task. Methods: The Web of Knowledge, PubMed and PsycInfo were searched using the keywords attentional bias modification, attentional bias manipulation and attentional bias training. We selected empirical studies about ABM training using a visual probe task written in English and published between 2002 and 2014. Results: Fifty-seven studies met inclusion criteria. Most (78%) succeeded in training attention in the predicted direction, and in 71% results were generalized to other measures correlated with the symptoms. Conclusions: ABM has potential clinical utility, but to standardize methods and maximize applicability, future studies should include clinical samples and be based on findings of studies about its effectiveness.

  17. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    Science.gov (United States)

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.

  18. Sonification of reference markers for auditory graphs: effects on non-visual point estimation tasks

    Directory of Open Access Journals (Sweden)

    Oussama Metatla

    2016-04-01

    Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings: pitch-only, one-reference, and multiple-references, which we designed to provide information about distance from an origin. We assess the effects of these sonifications on users’ performance when completing point estimation tasks in a between-subject experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users’ accuracy with a trade-off for task completion times, and that the multiple-references mapping is particularly effective when dealing with points that are positioned at the midrange of a given axis.
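
    A small sketch of the mapping logic described above, assuming a linear value-to-frequency mapping for the pitch-only condition and reference markers rendered as fixed tones at evenly spaced axis positions; the frequency range and axis limits are illustrative choices, not those used in the experiment.

```python
def value_to_pitch(value, v_min=0.0, v_max=100.0, f_min=220.0, f_max=880.0):
    """Pitch-only mapping: linear interpolation of a data value onto a frequency range (Hz)."""
    frac = (value - v_min) / (v_max - v_min)
    return f_min + frac * (f_max - f_min)

def reference_tones(n_refs, v_min=0.0, v_max=100.0):
    """Reference markers: fixed tones at evenly spaced axis positions (n_refs=1 gives the origin only)."""
    positions = [v_min + i * (v_max - v_min) / max(n_refs - 1, 1) for i in range(n_refs)]
    return [value_to_pitch(p) for p in positions]

print(round(value_to_pitch(25.0)))              # 385 Hz for a point a quarter of the way up the axis
print([round(f) for f in reference_tones(3)])   # multiple references: 220, 550, 880 Hz
```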

  19. Flexible attention allocation to visual and auditory working memory tasks : manipulating reward induces a trade-off

    NARCIS (Netherlands)

    Morey, Candice Coker; Cowan, Nelson; Morey, Richard D.; Rouder, Jeffery N.

    Prominent roles for general attention resources are posited in many models of working memory, but the manner in which these can be allocated differs between models or is not sufficiently specified. We varied the payoffs for correct responses in two temporally-overlapping recognition tasks, a visual

  20. Visual attention and emotional memory: recall of aversive pictures is partially mediated by concurrent task performance.

    Science.gov (United States)

    Pottage, Claire L; Schaefer, Alexandre

    2012-02-01

    The emotional enhancement of memory is often thought to be determined by attention. However, recent evidence using divided attention paradigms suggests that attention does not play a significant role in the formation of memories for aversive pictures. We report a study that investigated this question using a paradigm in which participants had to encode lists of randomly intermixed negative and neutral pictures under conditions of full attention and divided attention (DA), followed by a free recall test. Attention was divided by a highly demanding concurrent task tapping visual processing resources. Results showed that the advantage in recall for aversive pictures was still present in the DA condition. However, mediation analyses also revealed that concurrent task performance significantly mediated the emotional enhancement of memory under divided attention. This finding suggests that visual attentional processes play a significant role in the formation of emotional memories. PsycINFO Database Record (c) 2012 APA, all rights reserved
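
    The abstract does not detail the mediation procedure; the sketch below illustrates the standard product-of-coefficients logic (picture emotion -> concurrent-task performance -> recall) on simulated data, with all effect sizes invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
emotion = rng.integers(0, 2, n).astype(float)         # 0 = neutral picture, 1 = aversive picture
concurrent = 0.6 * emotion + rng.normal(0, 1, n)      # mediator: concurrent-task performance cost
recall = 0.5 * concurrent + 0.2 * emotion + rng.normal(0, 1, n)

def ols_slopes(y, *predictors):
    """Coefficients (after the intercept) from a least-squares regression of y on the predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(concurrent, emotion)[0]                  # path a: emotion -> mediator
b, c_prime = ols_slopes(recall, concurrent, emotion)    # path b and direct effect c'
c = ols_slopes(recall, emotion)[0]                      # total effect c
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}, total effect c = {c:.2f}")
```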

  1. Forward Models Applied in Visual Servoing for a Reaching Task in the iCub Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2009-01-01

    This paper details the application of a forward model to improve a reaching task. The reaching task must be accomplished by a humanoid robot with 53 degrees of freedom (d.o.f.) and a stereo-vision system. We have explored via simulations a new way of constructing and utilizing a forward model that encodes eye–hand relationships. We constructed a forward model using the data obtained from only a single reaching attempt. ANFIS neural networks are used to construct the forward model, but the forward model is updated online with new information that comes from each reaching attempt. Using the obtained forward model, an initial image Jacobian is estimated and is used with a visual servoing controller. Simulation results demonstrate that errors are lower when the initial image Jacobian is derived from the forward model. This paper is one of the few attempts at applying visual servoing in a complete humanoid robot.
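
    As a hedged illustration of the control law this line of work typically builds on (the paper's own controller may differ in detail), classical image-based visual servoing maps the image-feature error through the pseudoinverse of the image Jacobian to a joint-velocity command; the Jacobian below is a placeholder for the one estimated from the forward model.

```python
import numpy as np

def ibvs_step(features, target_features, image_jacobian, gain=0.5):
    """Classical IBVS law: joint velocities = -gain * pinv(J) @ (s - s_desired)."""
    error = features - target_features
    return -gain * np.linalg.pinv(image_jacobian) @ error

# Illustrative case: 2 image features controlled through 3 joints (all numbers are placeholders).
J = np.array([[1.0, 0.2, 0.0],
              [0.1, 0.8, 0.3]])
s, s_star = np.array([120.0, 80.0]), np.array([100.0, 100.0])
print(ibvs_step(s, s_star, J))   # joint-velocity command that drives the feature error toward zero
```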

  2. Different levels of food restriction reveal genotype-specific differences in learning a visual discrimination task.

    Directory of Open Access Journals (Sweden)

    Kalina Makowiecki

    In behavioural experiments, motivation to learn can be achieved using food rewards as positive reinforcement in food-restricted animals. Previous studies reduce animal weights to 80-90% of free-feeding body weight as the criterion for food restriction. However, effects of different degrees of food restriction on task performance have not been assessed. We compared learning task performance in mice food-restricted to 80 or 90% body weight (BW). We used adult wildtype (WT; C57Bl/6j) and knockout (ephrin-A2⁻/⁻) mice, previously shown to have a reverse learning deficit. Mice were trained in a two-choice visual discrimination task with food reward as positive reinforcement. When mice reached criterion for one visual stimulus (80% correct in three consecutive 10-trial sets), they began the reverse learning phase, where the rewarded stimulus was switched to the previously incorrect stimulus. For the initial learning and reverse phase of the task, mice at 90%BW took almost twice as many trials to reach criterion as mice at 80%BW. Furthermore, WT 80 and 90%BW groups significantly differed in percentage correct responses and learning strategy in the reverse learning phase, whereas no differences between weight restriction groups were observed in ephrin-A2⁻/⁻ mice. Most importantly, genotype-specific differences in reverse learning strategy were only detected in the 80%BW groups. Our results indicate that increased food restriction not only results in better performance and a shorter training period, but may also be necessary for revealing behavioural differences between experimental groups. This has important ethical and animal welfare implications when deciding extent of diet restriction in behavioural studies.

  3. Adults with dyslexia demonstrate large effects of crowding and detrimental effects of distractors in a visual tilt discrimination task.

    Directory of Open Access Journals (Sweden)

    Rizan Cassim

    Previous research has shown that adults with dyslexia (AwD) are disproportionately impacted by close spacing of stimuli and increased numbers of distractors in a visual search task compared to controls [1]. Using an orientation discrimination task, the present study extended these findings to show that even in conditions where target search was not required: (i) AwD had detrimental effects of both crowding and increased numbers of distractors; (ii) AwD had more pronounced difficulty with distractor exclusion in the left visual field and (iii) measures of crowding and distractor exclusion correlated significantly with literacy measures. Furthermore, such difficulties were not accounted for by the presence of covarying symptoms of ADHD in the participant groups. These findings provide further evidence to suggest that the ability to exclude distracting stimuli likely contributes to the reported visual attention difficulties in AwD and to the aetiology of literacy difficulties. The pattern of results is consistent with weaker and asymmetric attention in AwD.

  4. Visual search in barn owls: Task difficulty and saccadic behavior.

    Science.gov (United States)

    Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2018-01-01

    How do we find what we are looking for? A target can be in plain view, but it may be detected only after extensive search. During a search we make directed attentional deployments like saccades to segment the scene until we detect the target. Depending on difficulty, the search may be fast with few attentional deployments or slow with many, shorter deployments. Here we study visual search in barn owls by tracking their overt attentional deployments, that is, their head movements, with a camera. We conducted a low-contrast feature search, a high-contrast orientation conjunction search, and a low-contrast orientation conjunction search, each with set sizes varying from 16 to 64 items. The barn owls were able to learn all of these tasks and showed serial search behavior. In a subsequent step, we analyzed how search behavior of owls changes with search complexity. We compared the search mechanisms in these three serial searches with results from pop-out searches our group had reported earlier. Saccade amplitude shortened and fixation duration increased in difficult searches. Also, in conjunction search saccades were guided toward items with shared target features. These data suggest that during visual search, barn owls utilize mechanisms similar to those that humans use.

  5. The visual attention span deficit in dyslexia is visual and not verbal.

    Science.gov (United States)

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  6. What Top-Down Task Sets Do for Us: An ERP Study on the Benefits of Advance Preparation in Visual Search

    Science.gov (United States)

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-01-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features.…

  7. Exploring the dynamics of balance data - movement variability in terms of drift and diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Gottschall, Julia [Institute of Physics, University of Oldenburg, D-26111 Oldenburg (Germany)], E-mail: julia.gottschall@uni-oldenburg.de; Peinke, Joachim [Institute of Physics, University of Oldenburg, D-26111 Oldenburg (Germany)], E-mail: peinke@uni-oldenburg.de; Lippens, Volker [Department of Human Movement, University of Hamburg, Moller Street 10, D-20148 Hamburg (Germany)], E-mail: vlippens@uni-hamburg.de; Nagel, Volker [Department of Human Movement, University of Hamburg, Moller Street 10, D-20148 Hamburg (Germany)

    2009-02-23

    We introduce a method to analyze postural control on a balance board by reconstructing the underlying dynamics in terms of a Langevin model. Drift and diffusion coefficients are directly estimated from the data and fitted by a suitable parametrization. The governing parameters are utilized to evaluate balance performance and the impact of supra-postural tasks on it. We show that the proposed method of analysis gives not only self-consistent results but also provides a plausible model for the reconstruction of balance dynamics.
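
    A minimal sketch of the direct estimation referred to above: drift and diffusion are taken as the first and second conditional moments of the increments within bins of the state variable. Since the balance-board recordings are not available here, the example runs on a synthetic Ornstein-Uhlenbeck series; bin counts and parameter values are illustrative.

```python
import numpy as np

def drift_diffusion(x, dt, n_bins=30):
    """Estimate Langevin drift D1(x) and diffusion D2(x) from conditional increment moments."""
    dx = np.diff(x)
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    idx = np.digitize(x[:-1], bins) - 1
    D1 = np.array([dx[idx == i].mean() / dt if np.any(idx == i) else np.nan for i in range(n_bins)])
    D2 = np.array([(dx[idx == i] ** 2).mean() / (2 * dt) if np.any(idx == i) else np.nan
                   for i in range(n_bins)])
    return centers, D1, D2

# Synthetic test signal: Ornstein-Uhlenbeck process dx = -x dt + sqrt(2 * 0.5) dW.
rng = np.random.default_rng(2)
dt, n = 0.01, 200_000
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = x[t] - x[t] * dt + np.sqrt(2 * 0.5 * dt) * rng.normal()

centers, D1, D2 = drift_diffusion(x, dt)
# For this test signal the estimates should recover D1(x) ~ -x and D2(x) ~ 0.5.
```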

  8. Exploring the dynamics of balance data - movement variability in terms of drift and diffusion

    International Nuclear Information System (INIS)

    Gottschall, Julia; Peinke, Joachim; Lippens, Volker; Nagel, Volker

    2009-01-01

    We introduce a method to analyze postural control on a balance board by reconstructing the underlying dynamics in terms of a Langevin model. Drift and diffusion coefficients are directly estimated from the data and fitted by a suitable parametrization. The governing parameters are utilized to evaluate balance performance and the impact of supra-postural tasks on it. We show that the proposed method of analysis gives not only self-consistent results but also provides a plausible model for the reconstruction of balance dynamics

  9. Conceptual and visual features contribute to visual memory for natural images.

    Directory of Open Access Journals (Sweden)

    Gesche M Huebner

    We examined the role of conceptual and visual similarity in a memory task for natural images. The important novelty of our approach was that visual similarity was determined using an algorithm [1] instead of being judged subjectively. This similarity index takes colours and spatial frequencies into account. For each target, four distractors were selected that were (1) conceptually and visually similar, (2) only conceptually similar, (3) only visually similar, or (4) neither conceptually nor visually similar to the target image. Participants viewed 219 images with the instruction to memorize them. Memory for a subset of these images was tested subsequently. In Experiment 1, participants performed a two-alternative forced choice recognition task and in Experiment 2, a yes/no recognition task. In Experiment 3, testing occurred after a delay of one week. We analyzed the distribution of errors depending on distractor type. Performance was lowest when the distractor image was conceptually and visually similar to the target image, indicating that both factors matter in such a memory task. After delayed testing, these differences disappeared. Overall performance was high, indicating a large-capacity, detailed visual long-term memory.

  10. The Effect of Delayed Visual Feedback on Synchrony Perception in a Tapping Task

    Directory of Open Access Journals (Sweden)

    Mirjam Keetels

    2011-10-01

    Sensory events following a motor action are, within limits, interpreted as a causal consequence of those actions. For example, the clapping of the hands is initiated by the motor system, but subsequently visual, auditory, and tactile information is provided and processed. In the present study we examine the effect of temporal disturbances in this chain of motor-sensory events. Participants are instructed to tap a surface with their finger in synchrony with a chain of 20 sound clicks (ISI 750 ms). We examined the effect of additional visual information on this ‘tap-sound’ synchronization task. During tapping, subjects will see a video of their own tapping hand on a screen in front of them. The video can either be in synchrony with the tap (real-time recording), or can be slightly delayed (∼40–160 ms). In a control condition, no video is provided. We explore whether ‘tap-sound’ synchrony will be shifted as a function of the delayed visual feedback. Results will provide fundamental insights into how the brain preserves a causal interpretation of motor actions and their sensory consequences.

  11. Dexterity: A MATLAB-based analysis software suite for processing and visualizing data from tasks that measure arm or forelimb function.

    Science.gov (United States)

    Butensky, Samuel D; Sloan, Andrew P; Meyers, Eric; Carmel, Jason B

    2017-07-15

    Hand function is critical for independence, and neurological injury often impairs dexterity. To measure hand function in people or forelimb function in animals, sensors are employed to quantify manipulation. These sensors make assessment easier and more quantitative and allow automation of these tasks. While automated tasks improve objectivity and throughput, they also produce large amounts of data that can be burdensome to analyze. We created software called Dexterity that simplifies data analysis of automated reaching tasks. Dexterity is MATLAB software that enables quick analysis of data from forelimb tasks. Through a graphical user interface, files are loaded and data are identified and analyzed. These data can be annotated or graphed directly. Analysis is saved, and the graph and corresponding data can be exported. For additional analysis, Dexterity provides access to custom scripts created by other users. To determine the utility of Dexterity, we performed a study to evaluate the effects of task difficulty on the degree of impairment after injury. Dexterity analyzed two months of data and allowed new users to annotate the experiment, visualize results, and save and export data easily. Previous analysis of tasks was performed with custom data analysis, requiring expertise with analysis software. Dexterity made the tools required to analyze, visualize and annotate data easy to use by investigators without data science experience. Dexterity increases accessibility to automated tasks that measure dexterity by making analysis of large data intuitive, robust, and efficient. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The effect of stimulus duration and motor response in hemispatial neglect during a visual search task.

    Directory of Open Access Journals (Sweden)

    Laura M Jelsone-Swain

    Patients with hemispatial neglect exhibit a myriad of profound deficits. A hallmark of this syndrome is the patients' absence of awareness of items located in their contralesional space. Many studies, however, have demonstrated that neglect patients exhibit some level of processing of these neglected items. It has been suggested that unconscious processing of neglected information may manifest as a fast denial. This theory of fast denial proposes that neglected stimuli are detected in the same way as non-neglected stimuli, but without overt awareness. We evaluated the fast denial theory by conducting two separate visual search task experiments, each differing by the duration of stimulus presentation. Specifically, in Experiment 1 each stimulus remained in the participants' visual field until a response was made. In Experiment 2 each stimulus was presented for only a brief duration. We further evaluated the fast denial theory by comparing verbal to motor task responses in each experiment. Overall, our results from both experiments and tasks showed no evidence for the presence of implicit knowledge of neglected stimuli. Instead, patients with neglect responded the same when they neglected stimuli as when they correctly reported stimulus absence. These findings thus cast doubt on the concept of the fast denial theory and its consequent implications for non-conscious processing. Importantly, our study demonstrated that the only behavior affected was during conscious detection of ipsilesional stimuli. Specifically, patients were slower to detect stimuli in Experiment 1 compared to Experiment 2, suggesting a duration effect occurred during conscious processing of information. Additionally, reaction time and accuracy were similar when reporting verbally versus motorically. These results provide new insights into the perceptual deficits associated with neglect and further support other work that falsifies the fast denial account of non

  13. Interference control theory : A new perspective on dual-task interference in memorizing and responding to visual targets

    NARCIS (Netherlands)

    Nieuwenstein, Mark; Scholz, Sabine; Broers, Nico

    2015-01-01

    In a recent study, Nieuwenstein and Wyble (2014) showed that the consolidation of a masked visual target can be disrupted for up to one second by a trailing 2-alternative forced choice task. Aside from demonstrating that working memory consolidation involves a time-consuming process that continues

  14. A Multi-Area Stochastic Model for a Covert Visual Search Task.

    Directory of Open Access Journals (Sweden)

    Michael A Schwemmer

    Decisions typically comprise several elements. For example, attention must be directed towards specific objects, their identities recognized, and a choice made among alternatives. Pairs of competing accumulators and drift-diffusion processes provide good models of evidence integration in two-alternative perceptual choices, but more complex tasks requiring the coordination of attention and decision making involve multistage processing and multiple brain areas. Here we consider a task in which a target is located among distractors and its identity reported by lever release. The data comprise reaction times, accuracies, and single unit recordings from two monkeys' lateral intraparietal area (LIP) neurons. LIP firing rates distinguish between targets and distractors, exhibit stimulus set size effects, and show response-hemifield congruence effects. These data motivate our model, which uses coupled sets of leaky competing accumulators to represent processes hypothesized to occur in feature-selective areas and limb motor and pre-motor areas, together with the visual selection process occurring in LIP. Model simulations capture the electrophysiological and behavioral data, and fitted parameters suggest that different connection weights between LIP and the other cortical areas may account for the observed behavioral differences between the animals.
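
    A minimal sketch of a single pool of leaky competing accumulators of the kind the model couples across areas (in the spirit of Usher and McClelland's formulation): each unit integrates its input with leak and lateral inhibition plus noise until one crosses a threshold. The parameters and inputs are illustrative, not the values fitted to the monkey data.

```python
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.1, dt=0.01, threshold=1.0, max_t=5.0, seed=None):
    """Simulate one trial of leaky competing accumulators; return (choice index, reaction time)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(inputs))
    for step in range(int(max_t / dt)):
        lateral = inhibition * (x.sum() - x)                       # inhibition from the other units
        dx = (inputs - leak * x - lateral) * dt + noise * np.sqrt(dt) * rng.normal(size=x.size)
        x = np.clip(x + dx, 0.0, None)                             # activations stay non-negative
        if x.max() >= threshold:
            return int(x.argmax()), (step + 1) * dt
    return None, max_t                                             # no decision before the deadline

# The target (index 0) receives slightly stronger evidence than three distractor locations.
choice, rt = lca_trial(np.array([1.2, 1.0, 1.0, 1.0]), seed=3)
print(choice, rt)
```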

  15. Reduced plantar sole sensitivity facilitates early adaptation to a visual rotation pointing task when standing upright

    Directory of Open Access Journals (Sweden)

    Maxime Billot

    2016-09-01

    Humans are capable of pointing to a target with accuracy. However, when vision is distorted through a visual rotation or mirror-reversed vision, the performance is initially degraded and thereafter improves with practice. There are suggestions that this gradual improvement results from a sensorimotor recalibration involving initial gating of the somatosensory information from the pointing hand. In the present experiment, we examined if this process interfered with balance control by asking participants to point to targets with a visual rotation from a standing posture. This duality in processing sensory information (i.e., gating sensory signals from the hand while processing those arising from the control of balance) could generate initial interference leading to a degraded pointing performance. We hypothesized that if this is the case, the attenuation of plantar sole somatosensory information through cooling could reduce the sensorimotor interference, and facilitate the early adaptation (i.e., improvement) in the pointing task. Results supported this hypothesis. These observations suggest that processing sensory information for balance control interferes with the sensorimotor recalibration process imposed by a pointing task when vision is rotated.

  16. CHARACTERIZATION OF THE EFFECTS OF INHALED PERCHLOROETHYLENE ON SUSTAINED ATTENTION IN RATS PERFORMING A VISUAL SIGNAL DETECTION TASK

    Science.gov (United States)

    The aliphatic hydrocarbon perchloroethylene (PCE) has been associated with neurobehavioral dysfunction including reduced attention in humans. The current study sought to assess the effects of inhaled PCE on sustained attention in rats performing a visual signal detection task (S...

  17. Performance of brain-damaged, schizophrenic, and normal subjects on a visual searching task.

    Science.gov (United States)

    Goldstein, G; Kyc, F

    1978-06-01

    Goldstein, Rennick, Welch, and Shelly (1973) reported on a visual searching task that generated 94.1% correct classifications when comparing brain-damaged and normal subjects, and 79.4% correct classifications when comparing brain-damaged and psychiatric patients. In the present study, representing a partial cross-validation with some modification of the test procedure, comparisons were made between brain-damaged and schizophrenic, and brain-damaged and normal subjects. There were 92.5% correct classifications for the brain-damaged vs normal comparison, and 82.5% correct classifications for the brain-damaged vs schizophrenic comparison.

  18. Alterations in task-induced activity and resting-state fluctuations in visual and DMN areas revealed in long-term meditators.

    Science.gov (United States)

    Berkovich-Ohana, Aviva; Harel, Michal; Hahamy, Avital; Arieli, Amos; Malach, Rafael

    2016-07-15

    Recently we proposed that the information contained in spontaneously emerging (resting-state) fluctuations may reflect individually unique neuro-cognitive traits. One prediction of this conjecture, termed the "spontaneous trait reactivation" (STR) hypothesis, is that resting-state activity patterns could be diagnostic of unique personalities, talents and life-styles of individuals. Long-term meditators could provide a unique experimental group to test this hypothesis. Using fMRI we found that, during resting-state, the amplitude of spontaneous fluctuations in long-term mindfulness meditation (MM) practitioners was enhanced in the visual cortex and significantly reduced in the DMN compared to naïve controls. Importantly, during a visual recognition memory task, the MM group showed heightened visual cortex responsivity, concomitant with weaker negative responses in Default Mode Network (DMN) areas. This effect was also reflected in the behavioral performance, where MM practitioners performed significantly faster than the control group. Thus, our results uncover opposite changes in the visual and default mode systems in long-term meditators which are revealed during both rest and task. The results support the STR hypothesis and extend it to the domain of local changes in the magnitude of the spontaneous fluctuations. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. UNDERSTANDING PROSE THROUGH TASK ORIENTED AUDIO-VISUAL ACTIVITY: AN AMERICAN MODERN PROSE COURSE AT THE FACULTY OF LETTERS, PETRA CHRISTIAN UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Sarah Prasasti

    2001-01-01

    Full Text Available The method presented here provides the basis for a course in American prose for EFL students. Understanding and appreciating American prose is a difficult task for the students because they come into contact with works that are full of cultural baggage and far removed from their own world. The audio-visual aid is one alternative for sensitizing the students to the topic and the cultural background. Instead of providing ready-made audio-visual aids, teachers can involve students actively in a more task-oriented audio-visual project. Here, the teachers encourage their students to create their own audio-visual aids using colors, pictures, sound, and gestures as a point of initiation for further discussion. The students can use color, which has become a strong element of fiction, to help them call up a forceful visual representation. Pictures can also stimulate the students to build their mental image. Sound and silence, which are part of the fabric of literature, may also help them to increase the emotional impact.

  20. Object representations in visual working memory change according to the task context.

    Science.gov (United States)

    Balaban, Halely; Luria, Roy

    2016-08-01

    This study investigated whether an item's representation in visual working memory (VWM) can be updated according to changes in the global task context. We used a modified change detection paradigm, in which the items moved before the retention interval. In all of the experiments, we presented identical color-color conjunction items that were arranged to provide a common fate Gestalt grouping cue during their movement. Task context was manipulated by adding a condition highlighting either the integrated interpretation of the conjunction items or their individuated interpretation. We monitored the contralateral delay activity (CDA) as an online marker of VWM. Experiment 1 employed only a minimal global context; the conjunction items were integrated during their movement, but then were partially individuated, at a late stage of the retention interval. The same conjunction items were perfectly integrated in an integration context (Experiment 2). An individuation context successfully produced strong individuation, already during the movement, overriding Gestalt grouping cues (Experiment 3). In Experiment 4, a short priming of the individuation context managed to individuate the conjunction items immediately after the Gestalt cue was no longer available. Thus, the representations of identical items changed according to the task context, suggesting that VWM interprets incoming input according to global factors which can override perceptual cues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. The effects of stimulus modality and task integrality: Predicting dual-task performance and workload from single-task levels

    Science.gov (United States)

    Hart, S. G.; Shively, R. J.; Vidulich, M. A.; Miller, R. C.

    1986-01-01

    The influence of stimulus modality and task difficulty on workload and performance was investigated. The goal was to quantify the cost (in terms of response time and experienced workload) incurred when essentially serial task components shared common elements (e.g., the response to one initiated the other) which could be accomplished in parallel. The experimental tasks were based on the Fittsberg paradigm, in which the solution to a Sternberg-type memory task determines which of two identical Fitts targets is acquired. Previous research suggested that such functionally integrated dual tasks are performed with substantially less workload and faster response times than would be predicted by summing single-task components when both are presented in the same stimulus modality (visual). The physical integration of task elements was varied (although their functional relationship remained the same) to determine whether dual-task facilitation would persist if task components were presented in different sensory modalities. Again, it was found that the cost of performing the two-stage task was considerably less than the sum of component single-task levels when both were presented visually. Less facilitation was found when task elements were presented in different sensory modalities. These results suggest the importance of distinguishing between concurrent tasks that compete for limited resources and those that beneficially share common resources when selecting the stimulus modalities for information displays.
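    The comparison described above amounts to contrasting observed dual-task response time against the sum of the single-task components. A minimal sketch of that computation; the latencies are made-up illustrative numbers, not data from the study.

    ```python
    # Illustrative values only (seconds); not data from the study.
    rt_memory_single = 0.45   # single-task Sternberg-type decision
    rt_aiming_single = 0.55   # single-task Fitts-type target acquisition
    rt_dual_observed = 0.80   # observed two-stage ("Fittsberg") response time

    rt_dual_predicted = rt_memory_single + rt_aiming_single  # serial (summation) prediction
    facilitation = rt_dual_predicted - rt_dual_observed      # > 0 means dual-task savings
    print(f"predicted {rt_dual_predicted:.2f}s, observed {rt_dual_observed:.2f}s, "
          f"facilitation {facilitation:.2f}s")
    ```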

  2. A common source of attention for auditory and visual tracking.

    Science.gov (United States)

    Fougnie, Daryl; Cockhren, Jurnell; Marois, René

    2018-05-01

    Tasks that require tracking visual information reveal the severe limitations of our capacity to attend to multiple objects that vary in time and space. Although these limitations have been extensively characterized in the visual domain, very little is known about tracking information in other sensory domains. Does tracking auditory information exhibit characteristics similar to those of tracking visual information, and to what extent do these two tracking tasks draw on the same attention resources? We addressed these questions by asking participants to perform either single or dual tracking tasks from the same (visual-visual) or different (visual-auditory) perceptual modalities, with the difficulty of the tracking tasks being manipulated across trials. The results revealed that performing two concurrent tracking tasks, whether they were in the same or different modalities, affected tracking performance as compared to performing each task alone (concurrence costs). Moreover, increasing task difficulty also led to increased costs in both the single-task and dual-task conditions (load-dependent costs). The comparison of concurrence costs between visual-visual and visual-auditory dual-task performance revealed slightly greater interference when two visual tracking tasks were paired. Interestingly, however, increasing task difficulty led to equivalent costs for visual-visual and visual-auditory pairings. We concluded that visual and auditory tracking draw largely, though not exclusively, on common central attentional resources.

  3. Task-irrelevant expectation violations in sequential manual actions: Evidence for a “check-after-surprise” mode of visual attention and eye-hand decoupling

    Directory of Open Access Journals (Sweden)

    Rebecca Martina Foerster

    2016-11-01

    Full Text Available When performing sequential manual actions (e.g., cooking), visual information is prioritized according to the task, determining where and when to attend, look, and act. In well-practiced sequential actions, long-term memory (LTM)-based expectations specify which action targets might be found where and when. We have previously demonstrated (Foerster and Schneider, 2015b) that violations of such expectations that are task-relevant (e.g., a target location change) cause a regression from a memory-based mode of attentional selection to visual search. How might task-irrelevant expectation violations in such well-practiced sequential manual actions modify attentional selection? This question was investigated with a computerized version of the number-connection test. Participants clicked on nine spatially distributed numbered target circles in ascending order while eye movements were recorded as a proxy for covert attention. The targets' visual features and locations stayed constant for 65 prechange trials, allowing participants to practice the manual action sequence. Subsequently, a task-irrelevant expectation violation occurred and remained for 20 change trials. Specifically, action target number 4 appeared in a different font. In 15 reversion trials, number 4 returned to the original font. During the first task-irrelevant change trial, manual clicking was slower and eye scanpaths were larger and contained more fixations. The additional fixations were mainly checking fixations on the changed target while acting on later targets. Whereas the eyes repeatedly revisited the task-irrelevant change, cursor paths remained completely unaffected. Effects lasted for 2-3 change trials and did not reappear during reversion. In conclusion, an unexpected task-irrelevant change to a task-defining feature of a well-practiced manual sequence leads to eye-hand decoupling and a check-after-surprise mode of attentional selection.

  4. Visual search elicits the electrophysiological marker of visual working memory.

    Directory of Open Access Journals (Sweden)

    Stephen M Emrich

    Full Text Available BACKGROUND: Although limited in capacity, visual working memory (VWM plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA, which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. METHODOLOGY/PRINCIPAL FINDINGS: The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. CONCLUSIONS/SIGNIFICANCE: We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors.

  5. Postural Control Can Be Well Maintained by Healthy, Young Adults in Difficult Visual Task, Even in Sway-Referenced Dynamic Conditions.

    Science.gov (United States)

    Lions, Cynthia; Bucci, Maria Pia; Bonnet, Cédrick

    2016-01-01

    To challenge the validity of existing cognitive models of postural control, we recorded eye movements and postural sway during two visual tasks (a control free-viewing task and a difficult searching task) and two postural tasks (a static task in which the platform was maintained stable and a dynamic task in which the platform moved in a sway-referenced manner). We expected these models to be insufficient to predict the results in postural control both in static conditions (as already shown in the literature) and in dynamic platform conditions. Twelve healthy, young adults (17.3 to 34.1 years old) participated in this study. Postural performance was evaluated using the Multitest platform (Framiral®) and ocular recording was performed with Mobile T2 (e(ye)BRAIN®). In the free-viewing task, the participants had to look at an image, without any specific instruction. In the searching task, the participants had to look at an image and also locate the position of an object in the scene. Postural sway was significantly higher only in the dynamic free-viewing condition compared with the three other conditions, with no significant difference among these three other conditions. Visual task performance was slightly higher in dynamic than in static conditions. As expected, our results did not confirm the main assumption of the current cognitive models of postural control, i.e., that the limited attentional resources of the brain should explain changes in postural control in our conditions. Indeed, 1) the participants did not sway significantly more in the sway-referenced dynamic searching condition than in any other condition; 2) the participants swayed significantly less in both static and dynamic searching conditions than in the dynamic free-viewing condition. We suggest that a new cognitive model illustrating the adaptive, functional role of the brain in controlling upright stance is necessary for future studies.

  7. Dynamic spatial coding within the dorsal frontoparietal network during a visual search task.

    Directory of Open Access Journals (Sweden)

    Wieland H Sommer

    Full Text Available To what extent are the left and right visual hemifields spatially coded in the dorsal frontoparietal attention network? In many experiments with neglect patients, the left hemisphere shows a contralateral hemifield preference, whereas the right hemisphere represents both hemifields. This pattern of spatial coding is often used to explain the right-hemispheric dominance of lesions causing hemispatial neglect. However, pathophysiological mechanisms of hemispatial neglect are controversial because recent experiments on healthy subjects produced conflicting results regarding the spatial coding of visual hemifields. We used an fMRI paradigm that allowed us to distinguish two attentional subprocesses during a visual search task. Either within the left or right hemifield, subjects first attended to stationary locations (spatial orienting) and then shifted their attentional focus to search for a target line. Dynamic changes in spatial coding of the left and right hemifields were observed within subregions of the dorsal frontoparietal network: during stationary spatial orienting, we found the well-known spatial pattern described above, with a bilateral hemifield representation in the right hemisphere and a contralateral preference in the left hemisphere. However, during search, the right hemisphere had a contralateral preference and the left hemisphere equally represented both hemifields. This finding leads to novel perspectives regarding models of visuospatial attention and hemispatial neglect.

  8. Seeing without knowing: task relevance dissociates between visual awareness and recognition.

    Science.gov (United States)

    Eitam, Baruch; Shoval, Roy; Yeshurun, Yaffa

    2015-03-01

    We demonstrate that task relevance dissociates between visual awareness and knowledge activation to create a state of seeing without knowing-visual awareness of familiar stimuli without recognizing them. We rely on the fact that in order to experience a Kanizsa illusion, participants must be aware of its inducers. While people can indicate the orientation of the illusory rectangle with great ease (signifying that they have consciously experienced the illusion's inducers), almost 30% of them could not report the inducers' color. Thus, people can see, in the sense of phenomenally experiencing, but not know, in the sense of recognizing what the object is or activating appropriate knowledge about it. Experiment 2 tests whether relevance-based selection operates within objects and shows that, contrary to the pattern of results found with features of different objects in our previous studies and replicated in Experiment 1, selection does not occur when both relevant and irrelevant features belong to the same object. We discuss these findings in relation to the existing theories of consciousness and to attention and inattentional blindness, and the role of cognitive load, object-based attention, and the use of self-reports as measures of awareness. © 2015 New York Academy of Sciences.

  9. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused sound still captures attention. The current study investigated whether there is

  10. Simulator study of the effect of visual-motion time delays on pilot tracking performance with an audio side task

    Science.gov (United States)

    Riley, D. R.; Miller, G. K., Jr.

    1978-01-01

    The effect of time delay in the visual and motion cues of a flight simulator on pilot performance was determined while tracking a target aircraft that oscillated sinusoidally in altitude only. An audio side task was used to ensure that the subject was fully occupied at all times. The results indicate that, within the test grid employed, about the same acceptable time delay (250 msec) was obtained for a single aircraft (fighter type) by each of two subjects for both fixed-base and motion-base conditions. Acceptable time delay is defined as the largest amount of delay that can be inserted simultaneously into the visual and motion cues before performance degradation occurs. A statistical analysis of the data was made to establish this value of time delay. The audio side task provided quantitative data that documented the subject's work level.

  11. Inhibition in movement plan competition: reach trajectories curve away from remembered and task-irrelevant present but not from task-irrelevant past visual stimuli.

    Science.gov (United States)

    Moehler, Tobias; Fiehler, Katja

    2017-11-01

    The current study investigated the role of automatic encoding and maintenance of remembered, past, and present visual distractors for reach movement planning. The previous research on eye movements showed that saccades curve away from locations actively kept in working memory and also from task-irrelevant perceptually present visual distractors, but not from task-irrelevant past distractors. Curvature away has been associated with an inhibitory mechanism resolving the competition between multiple active movement plans. Here, we examined whether reach movements underlie a similar inhibitory mechanism and thus show systematic modulation of reach trajectories when the location of a previously presented distractor has to be (a) maintained in working memory or (b) ignored, or (c) when the distractor is perceptually present. Participants performed vertical reach movements on a computer monitor from a home to a target location. Distractors appeared laterally and near or far from the target (equidistant from central fixation). We found that reaches curved away from the distractors located close to the target when the distractor location had to be memorized and when it was perceptually present, but not when the past distractor had to be ignored. Our findings suggest that automatically encoding present distractors and actively maintaining the location of past distractors in working memory evoke a similar response competition resolved by inhibition, as has been previously shown for saccadic eye movements.

  12. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  13. Direct and indirect effects of attention and visual function on gait impairment in Parkinson's disease: influence of task and turning.

    Science.gov (United States)

    Stuart, Samuel; Galna, Brook; Delicato, Louise S; Lord, Sue; Rochester, Lynn

    2017-07-01

    Gait impairment is a core feature of Parkinson's disease (PD) which has been linked to cognitive and visual deficits, but interactions between these features are poorly understood. Monitoring saccades allows investigation of real-time cognitive and visual processes and their impact on gait when walking. This study explored: (i) saccade frequency when walking under different attentional manipulations of turning and dual-task; and (ii) direct and indirect relationships between saccades, gait impairment, vision and attention. Saccade frequency (number of fast eye movements per second) was measured during gait in 60 PD and 40 age-matched control participants using a mobile eye-tracker. Saccade frequency was significantly reduced in PD compared to controls during all conditions. However, saccade frequency increased with a turn and decreased under dual-task for both groups. Poorer attention directly related to saccade frequency, visual function and gait impairment in PD, but not controls. Saccade frequency did not directly relate to gait in PD, but did in controls. Instead, saccade frequency and visual function deficit indirectly impacted gait impairment in PD, which was underpinned by their relationship with attention. In conclusion, our results suggest a vital role for attention with direct and indirect influences on gait impairment in PD. Attention directly impacted saccade frequency, visual function and gait impairment in PD, with connotations for falls. It also underpinned the indirect impact of visual and saccadic impairment on gait. Attention therefore represents a key therapeutic target that should be considered in future research. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. An Empirical Study on Using Visual Embellishments in Visualization.

    Science.gov (United States)

    Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min

    2012-12-01

    In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.

  15. Reduced dual-task gait speed is associated with visual Go/No-Go brain network activation in children and adolescents with concussion.

    Science.gov (United States)

    Howell, David R; Meehan, William P; Barber Foss, Kim D; Reches, Amit; Weiss, Michal; Myer, Gregory D

    2018-05-31

    To investigate the association between dual-task gait performance and brain network activation (BNA) using an electroencephalography (EEG)-based Go/No-Go paradigm among children and adolescents with concussion. Participants with a concussion completed a visual Go/No-Go task while EEG brain activity was collected. Data were treated with BNA analysis, which involves an algorithmic approach to EEG-ERP activation quantification. Participants also completed a dual-task gait assessment. The relationship between dual-task gait speed and BNA was assessed using multiple linear regression models. Participants (n = 20, 13.9 ± 2.3 years of age, 50% female) were tested at a mean of 7.0 ± 2.5 days post-concussion and were symptomatic at the time of testing (post-concussion symptom scale = 40.4 ± 21.9). Slower dual-task average gait speed (mean = 82.2 ± 21.0 cm/s) was significantly associated with lower relative time BNA scores (mean = 39.6 ± 25.8) during the No-Go task (β = 0.599, 95% CI = 0.214, 0.985, p = 0.005, R² = 0.405), while controlling for the effect of age and gender. Among children and adolescents with a concussion, slower dual-task gait speed was independently associated with lower BNA relative time scores during a visual Go/No-Go task. The relationship between abnormal gait behaviour and brain activation deficits may be reflective of disruption to multiple functional abilities after concussion.
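    The association reported above (dual-task gait speed regressed on BNA relative-time scores while controlling for age and gender) corresponds to an ordinary multiple linear regression. Below is a minimal sketch using statsmodels; the data frame, its values, and the column names are hypothetical assumptions for illustration, not the authors' code or data.

    ```python
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data; column names and values are assumptions for illustration.
    df = pd.DataFrame({
        "gait_speed_cm_s": [70.1, 95.3, 62.8, 88.0, 80.5],    # dual-task gait speed
        "bna_relative_time": [25.0, 60.2, 18.4, 55.1, 41.0],  # No-Go BNA score
        "age_years": [12, 16, 11, 15, 14],
        "female": [1, 0, 1, 0, 1],                            # gender coded 0/1
    })

    # Ordinary least squares with age and gender as covariates.
    X = sm.add_constant(df[["bna_relative_time", "age_years", "female"]])
    model = sm.OLS(df["gait_speed_cm_s"], X).fit()
    print(model.params["bna_relative_time"])  # unstandardized slope (a standardized beta would need z-scored inputs)
    print(model.rsquared)
    ```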

  16. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow.

    Science.gov (United States)

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L; Migliore, Elaina M; Chipps, Esther M; Buck, Jacalyn

    2016-01-01

    A fundamental understanding of multitasking within nursing workflow is important in today's dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found that, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all task events; the total multitasking duration ranged from 14.6 minutes to 109 minutes, 44.98 minutes (18.63%) on average. We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives.

  17. The sensory strength of voluntary visual imagery predicts visual working memory capacity.

    Science.gov (United States)

    Keogh, Rebecca; Pearson, Joel

    2014-10-09

    How much we can actively hold in mind is severely limited and differs greatly from one person to the next. Why some individuals have greater capacities than others is largely unknown. Here, we investigated why such large variations in visual working memory (VWM) capacity might occur, by examining the relationship between visual working memory and visual mental imagery. To assess visual working memory capacity participants were required to remember the orientation of a number of Gabor patches and make subsequent judgments about relative changes in orientation. The sensory strength of voluntary imagery was measured using a previously documented binocular rivalry paradigm. Participants with greater imagery strength also had greater visual working memory capacity. However, they were no better on a verbal number working memory task. Introducing a uniform luminous background during the retention interval of the visual working memory task reduced memory capacity, but only for those with strong imagery. Likewise, for the good imagers increasing background luminance during imagery generation reduced its effect on subsequent binocular rivalry. Luminance increases did not affect any of the subgroups on the verbal number working memory task. Together, these results suggest that luminance was disrupting sensory mechanisms common to both visual working memory and imagery, and not a general working memory system. The disruptive selectivity of background luminance suggests that good imagers, unlike moderate or poor imagers, may use imagery as a mnemonic strategy to perform the visual working memory task. © 2014 ARVO.

  18. STATIC AND DYNAMIC POSTURE CONTROL IN POSTLINGUAL COCHLEAR IMPLANTED PATIENTS: Effects of dual-tasking, visual and auditory inputs suppression

    Directory of Open Access Journals (Sweden)

    Laurence Bernard-Demanze

    2014-01-01

    Full Text Available Posture control is based on central integration of multisensory inputs and on an internal representation of body orientation in space. This multisensory feedback regulates posture control and continuously updates the internal model of the body's position, which in turn forwards motor commands adapted to the environmental context and constraints. The peripheral localization of the vestibular system, close to the cochlea, makes vestibular damage possible following cochlear implant (CI) surgery. Impaired vestibular function in CI patients, if any, may have a strong impact on posture stability. The simple postural task of quiet standing is generally paired with cognitive activity in most daily-life conditions, leading to competition for attentional resources in dual-tasking and an increased risk of falls, particularly in patients with impaired vestibular function. This study was aimed at evaluating the effects of post-lingual cochlear implantation on posture control in adult deaf patients. Possible impairment of vestibular function was assessed by comparing the postural performance of patients to that of age-matched healthy subjects during a simple postural task performed in static and dynamic conditions, and during dual-tasking with a visual or auditory memory task. Postural tests were done in eyes open (EO) and eyes closed (EC) conditions, with the cochlear implant activated (ON) or not (OFF). Results showed that the CI patients had significantly reduced limits of stability and increased postural instability in static conditions. In dynamic conditions, they spent considerably more energy to maintain equilibrium, and their head was stabilized neither in space nor on the trunk, whereas the controls showed a whole-body rigidification strategy. Hearing (prosthesis on) as well as dual-tasking did not really improve the dynamic postural performance of the CI patients. We conclude that CI patients become strongly visually dependent, mainly in challenging postural conditions.

  19. Dynamic visual noise reduces confidence in short-term memory for visual information.

    Science.gov (United States)

    Kemps, Eva; Andrade, Jackie

    2012-05-01

    Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

  20. Pitch contour impairment in congenital amusia: New insights from the Self-paced Audio-visual Contour Task (SACT).

    Science.gov (United States)

    Lu, Xuejing; Sun, Yanan; Ho, Hao Tam; Thompson, William Forde

    2017-01-01

    Individuals with congenital amusia usually exhibit impairments in melodic contour processing when asked to compare pairs of melodies that may or may not be identical to one another. However, it is unclear whether the impairment observed in contour processing is caused by an impairment of pitch discrimination, or is a consequence of poor pitch memory. To help resolve this ambiguity, we designed a novel Self-paced Audio-visual Contour Task (SACT) that evaluates sensitivity to contour while placing minimal burden on memory. In this task, participants control the pace of an auditory contour that is simultaneously accompanied by a visual contour, and they are asked to judge whether the two contours are congruent or incongruent. In Experiment 1, melodic contours varying in pitch were presented with a series of dots that varied in spatial height. Amusics exhibited reduced sensitivity to audio-visual congruency in comparison to control participants. To exclude the possibility that the impairment arises from a general deficit in cross-modal mapping, Experiment 2 examined sensitivity to cross-modal mapping for two other auditory dimensions: timbral brightness and loudness. Amusics and controls were significantly more sensitive to large than small contour changes, and to changes in loudness than changes in timbre. However, there were no group differences in cross-modal mapping, suggesting that individuals with congenital amusia can comprehend spatial representations of acoustic information. Taken together, the findings indicate that pitch contour processing in congenital amusia remains impaired even when pitch memory is relatively unburdened.

  1. Visual Working Memory Capacity and Proactive Interference

    OpenAIRE

    Hartshorne, Joshua

    2008-01-01

    BACKGROUND: Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. METHODOLOGY/P...

  2. Age-related slowing of response selection and production in a visual choice reaction time task

    Directory of Open Access Journals (Sweden)

    David L Woods

    2015-04-01

    Full Text Available Aging is associated with delayed processing in choice reaction time (CRT) tasks, but the processing stages most impacted by aging have not been clearly identified. Here, we analyzed CRT latencies in a computerized serial visual feature-conjunction task. Participants responded to a target letter (probability 40%) by pressing one mouse button, and responded to distractor letters differing either in color, shape, or both features from the target (probabilities 20% each) by pressing the other mouse button. Stimuli were presented randomly to the left and right visual fields, and stimulus onset asynchronies (SOAs) were adaptively reduced following correct responses using a staircase procedure. In Experiment 1, we tested 1466 participants who ranged in age from 18 to 65 years. CRT latencies increased significantly with age (r = 0.47; 2.80 ms/year). Central processing time (CPT), isolated by subtracting simple reaction times (obtained in a companion experiment performed on the same day) from CRT latencies, accounted for more than 80% of age-related CRT slowing, with most of the remaining increase in latency due to slowed motor responses. Participants were faster and more accurate when the stimulus location was spatially compatible with the mouse button used for responding, and this effect increased slightly with age. Participants took longer to respond to distractors with target color or shape than to distractors with no target features. However, the additional time needed to discriminate the more target-like distractors did not increase with age. In Experiment 2, we replicated the findings of Experiment 1 in a second population of 178 participants (ages 18-82 years). CRT latencies did not differ significantly in the two experiments, and similar effects of age, distractor similarity, and stimulus-response spatial compatibility were found. The results suggest that the age-related slowing in visual CRT latencies is largely due to delays in response selection and
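    The adaptive SOA procedure and the CPT subtraction described above can be illustrated with a small sketch. This is not the authors' implementation; it shows a generic staircase that shortens the SOA after correct responses and lengthens it after errors (step size and bounds are arbitrary assumptions), plus the CPT computed as choice RT minus simple RT.

    ```python
    def next_soa(current_soa_ms, was_correct, step_ms=50, floor_ms=200, ceiling_ms=3000):
        """Generic adaptive staircase: shorten the SOA after a correct response,
        lengthen it after an error. Step size and bounds are illustrative only."""
        if was_correct:
            return max(floor_ms, current_soa_ms - step_ms)
        return min(ceiling_ms, current_soa_ms + step_ms)

    def central_processing_time(crt_ms, simple_rt_ms):
        """CPT isolated by subtracting simple reaction time from choice reaction time."""
        return crt_ms - simple_rt_ms

    # Example: SOA shrinks across three correct trials, then grows after an error.
    soa = 1000
    for correct in (True, True, True, False):
        soa = next_soa(soa, correct)
    print(soa)                                    # 900 ms after the sequence above
    print(central_processing_time(620.0, 310.0))  # 310.0 ms of central processing
    ```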

  3. Task-specific impairments and enhancements induced by magnetic stimulation of human visual area V5.

    OpenAIRE

    Walsh, V; Ellison, A; Battelli, L; Cowey, A

    1998-01-01

    Transcranial magnetic stimulation (TMS) can be used to simulate the effects of highly circumscribed brain damage permanently present in some neuropsychological patients, by reversibly disrupting the normal functioning of the cortical area to which it is applied. By using TMS we attempted to recreate deficits similar to those reported in a motion-blind patient and to assess the specificity of deficits when TMS is applied over human area V5. We used six visual search tasks and showed that subje...

  4. The Effects of Task Clarification, Visual Prompts, and Graphic Feedback on Customer Greeting and Up-Selling in a Restaurant

    Science.gov (United States)

    Squires, James; Wilder, David A.; Fixsen, Amanda; Hess, Erica; Rost, Kristen; Curran, Ryan; Zonneveld, Kimberly

    2007-01-01

    An intervention consisting of task clarification, visual prompts, and graphic feedback was evaluated to increase customer greeting and up-selling in a restaurant. A combination multiple baseline and reversal design was used to evaluate intervention effects. Although all interventions improved performance over baseline, the delivery of graphic…

  5. Designing and Evaluation of Reliability and Validity of Visual Cue-Induced Craving Assessment Task for Methamphetamine Smokers

    Directory of Open Access Journals (Sweden)

    Hamed Ekhtiari

    2010-08-01

    Full Text Available Introduction: Craving for methamphetamine is a significant health concern, and exposure to methamphetamine cues in the laboratory can induce craving. In this study, a task-designing procedure for evaluating methamphetamine cue-induced craving in laboratory conditions is examined. Methods: First, a series of visual cues which could induce craving was identified in 5 discussion sessions between expert clinicians and 10 methamphetamine smokers. Cues were categorized in 4 main clusters and photos were taken for each cue in a studio; then the 60 most evocative photos were selected and 10 neutral photos were added. In this phase, 50 subjects with methamphetamine dependence were exposed to the cues and rated the craving intensity induced by the 72 cues (60 active evocative photos + 10 neutral photos) on a self-report Visual Analogue Scale (ranging from 0-100). In this way, 50 photos with high levels of evocative potency (CICT 50) and the 10 photos with the most evocative potency (CICT 10) were obtained and, subsequently, the task was designed. Results: The task reliability (internal consistency) was measured by Cronbach's alpha, which was 91% for the CICT 50 and 71% for the CICT 10. The most craving was reported for the category "drug use procedure" (66.27±30.32) and the least for the category "cues associated with drug use" (31.38±32.96). Differences in cue-induced craving on the CICT 50 and CICT 10 were not associated with age, education, income, marital status, employment, or sexual activity in the 30 days prior to study entry. Family living condition was marginally correlated with higher scores on the CICT 50. Age of onset for opioids, cocaine, and methamphetamine was negatively correlated with the CICT 50 and CICT 10, and age of first opiate use was negatively correlated with the CICT 50. Discussion: Cue-induced craving for methamphetamine may be reliably measured by tasks designed in the laboratory, and the designed assessment tasks can be used in the cue reactivity paradigm, and
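    The internal-consistency figures quoted above are Cronbach's alpha values computed over per-item craving ratings. A minimal sketch of that computation on a hypothetical ratings matrix (subjects × cue photos); the data are invented for illustration and are not the study's ratings.

    ```python
    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for a (n_subjects, n_items) matrix of ratings.

        alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
        """
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]
        item_vars = ratings.var(axis=0, ddof=1)       # variance of each cue item
        total_var = ratings.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 0-100 VAS ratings: 4 subjects x 3 cue photos.
    ratings = [[80, 75, 90],
               [40, 35, 50],
               [65, 60, 70],
               [20, 25, 30]]
    print(round(cronbach_alpha(ratings), 3))
    ```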

  6. Specialization in the default mode: Task-induced brain deactivations dissociate between visual working memory and attention.

    Science.gov (United States)

    Mayer, Jutta S; Roebroeck, Alard; Maurer, Konrad; Linden, David E J

    2010-01-01

    The idea of an organized mode of brain function that is present as a default state and suspended during goal-directed behaviors has recently gained much interest in the study of human brain function. The default mode hypothesis is based on the repeated observation that certain brain areas show task-induced deactivations across a wide range of cognitive tasks. In this event-related functional magnetic resonance imaging study we tested the default mode hypothesis by comparing common and selective patterns of BOLD deactivation in response to the demands on visual attention and working memory (WM) that were independently modulated within one task. The results revealed task-induced deactivations within regions of the default mode network (DMN) with a segregation of areas that were additively deactivated by an increase in the demands on both attention and WM, and areas that were selectively deactivated by either high attentional demand or WM load. Attention-selective deactivations appeared in the left ventrolateral and medial prefrontal cortex and the left lateral temporal cortex. Conversely, WM-selective deactivations were found predominantly in the right hemisphere including the medial-parietal, the lateral temporo-parietal, and the medial prefrontal cortex. Moreover, during WM encoding deactivated regions showed task-specific functional connectivity. These findings demonstrate that task-induced deactivations within parts of the DMN depend on the specific characteristics of the attention and WM components of the task. The DMN can thus be subdivided into a set of brain regions that deactivate indiscriminately in response to cognitive demand ("the core DMN") and a part whose deactivation depends on the specific task. 2009 Wiley-Liss, Inc.

  7. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    Science.gov (United States)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind

  8. Testing visual short-term memory of pigeons (Columba livia) and a rhesus monkey (Macaca mulatta) with a location change detection task.

    Science.gov (United States)

    Leising, Kenneth J; Elmore, L Caitlin; Rivera, Jacquelyne J; Magnotti, John F; Katz, Jeffrey S; Wright, Anthony A

    2013-09-01

    Change detection is commonly used to assess capacity (number of objects) of human visual short-term memory (VSTM). Comparisons with the performance of non-human animals completing similar tasks have shown similarities and differences in object-based VSTM, which is only one aspect ("what") of memory. Another important aspect of memory, which has received less attention, is spatial short-term memory for "where" an object is in space. In this article, we show for the first time that a monkey and pigeons can be accurately trained to identify location changes, much as humans do, in change detection tasks similar to those used to test object capacity of VSTM. The subject's task was to identify (touch/peck) an item that changed location across a brief delay. Both the monkey and pigeons showed transfer to delays longer than the training delay, to greater and smaller distance changes than in training, and to novel colors. These results are the first to demonstrate location-change detection in any non-human species and encourage comparative investigations into the nature of spatial and visual short-term memory.

  9. Opposite brain laterality in analogous auditory and visual tests.

    Science.gov (United States)

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  10. Insensitivity of visual short-term memory to irrelevant visual information.

    Science.gov (United States)

    Andrade, Jackie; Kemps, Eva; Werniers, Yves; May, Jon; Szmalec, Arnaud

    2002-07-01

    Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.

  11. The development of organized visual search

    Science.gov (United States)

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  12. Developing Tests of Visual Dependency

    Science.gov (United States)

    Kindrat, Alexandra N.

    2011-01-01

    Astronauts develop neural adaptive responses to microgravity during space flight. Consequently, these adaptive responses cause maladaptive disturbances in balance and gait function when astronauts return to Earth and are re-exposed to gravity. Current research in the Neuroscience Laboratories at NASA-JSC is focused on understanding how exposure to space flight produces post-flight disturbances in balance and gait control, and on developing training programs designed to facilitate the rapid recovery of functional mobility after space flight. In concert with these disturbances, astronauts also often report an increase in their visual dependency during space flight. To better understand this phenomenon, studies were conducted with specially designed training programs focusing on visual dependency, with the aim of understanding and enhancing subjects' ability to rapidly adapt to novel sensory situations. The Rod and Frame Test (RFT) was used first to assess an individual's visual dependency, using a variety of testing techniques. Once assessed, subjects were asked to perform two novel tasks under transformation (both the Pegboard and Cube Construction tasks). Results indicate that head position cues and initial visual test conditions had no effect on an individual's visual dependency scores. Subjects were also able to adapt to the manual tasks after several trials. Individual visual dependency correlated with the ability to adapt manual performance to a novel visual distortion only for the cube task. Subjects with higher visual dependency showed a decreased ability to adapt to this task. Ultimately, it was revealed that the RFT may serve as an effective prediction tool to produce individualized adaptability training prescriptions that target the specific sensory profile of each crewmember.

  13. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks.

    Science.gov (United States)

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P

    2017-01-07

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet the two requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the six-degree-of-freedom (DOF) robot end-effector that performs a pick-and-place task. Two cooperating eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. The paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot
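    As a rough illustration of the circle-detection step described in this record, the sketch below runs a Circular Hough Transform on a single camera frame with OpenCV. The image file name and every parameter value are assumptions chosen for the example; they are not the settings reported by the authors, and the enhanced CHT, colour selection and CRCHT windowing steps of the paper are not reproduced here.

```python
# Minimal sketch of circle detection with the Circular Hough Transform (CHT),
# as used for locating a spherical target in a camera image.
# The file name and all parameter values are illustrative assumptions.
import cv2
import numpy as np

frame = cv2.imread("camera_frame.png")            # one frame from an eye-to-hand camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                    # reduce noise before the transform

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT,
    dp=1.2,          # inverse ratio of accumulator resolution to image resolution
    minDist=50,      # minimum distance between detected centres
    param1=100,      # upper Canny edge threshold
    param2=30,       # accumulator threshold: lower -> more (possibly false) circles
    minRadius=10, maxRadius=80,
)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)   # strongest candidate
    print(f"Target centre at ({x}, {y}), radius {r} px")
```

    In a tracking loop, the image region handed to the detector would normally be restricted to a window around the previous detection, which is essentially the idea behind the paper's CRCHT search-window control.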

  14. Pitch contour impairment in congenital amusia: New insights from the Self-paced Audio-visual Contour Task (SACT).

    Directory of Open Access Journals (Sweden)

    Xuejing Lu

    Full Text Available Individuals with congenital amusia usually exhibit impairments in melodic contour processing when asked to compare pairs of melodies that may or may not be identical to one another. However, it is unclear whether the impairment observed in contour processing is caused by an impairment of pitch discrimination, or is a consequence of poor pitch memory. To help resolve this ambiguity, we designed a novel Self-paced Audio-visual Contour Task (SACT) that evaluates sensitivity to contour while placing minimal burden on memory. In this task, participants control the pace of an auditory contour that is simultaneously accompanied by a visual contour, and they are asked to judge whether the two contours are congruent or incongruent. In Experiment 1, melodic contours varying in pitch were presented with a series of dots that varied in spatial height. Amusics exhibited reduced sensitivity to audio-visual congruency in comparison to control participants. To exclude the possibility that the impairment arises from a general deficit in cross-modal mapping, Experiment 2 examined sensitivity to cross-modal mapping for two other auditory dimensions: timbral brightness and loudness. Amusics and controls were significantly more sensitive to large than small contour changes, and to changes in loudness than changes in timbre. However, there were no group differences in cross-modal mapping, suggesting that individuals with congenital amusia can comprehend spatial representations of acoustic information. Taken together, the findings indicate that pitch contour processing in congenital amusia remains impaired even when pitch memory is relatively unburdened.

  15. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    Science.gov (United States)

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363

  16. Irrelevant Auditory and Visual Events Induce a Visual Attentional Blink

    NARCIS (Netherlands)

    Van der Burg, Erik; Nieuwenstein, Mark R.; Theeuwes, Jan; Olivers, Christian N. L.

    2013-01-01

    In the present study we investigated whether a task-irrelevant distractor can induce a visual attentional blink pattern. Participants were asked to detect only a visual target letter (A, B, or C) and to ignore the preceding auditory, visual, or audiovisual distractor. An attentional blink was

  17. Encoding and immediate retrieval tasks in patients with epilepsy: A functional MRI study of verbal and visual memory.

    Science.gov (United States)

    Saddiki, Najat; Hennion, Sophie; Viard, Romain; Ramdane, Nassima; Lopes, Renaud; Baroncini, Marc; Szurhaj, William; Reyns, Nicolas; Pruvo, Jean Pierre; Delmaire, Christine

    2018-05-01

    Medial temporal lobe structures, and more specifically the hippocampus, play a decisive role in episodic memory. Most memory functional magnetic resonance imaging (fMRI) studies evaluate the encoding phase, the retrieval phase being performed outside the MRI. We aimed to determine the ability to reveal greater hippocampal fMRI activations during the retrieval phase. Thirty-five epileptic patients underwent a two-step memory fMRI. During the encoding phase, subjects were requested to identify the feminine or masculine gender of the faces and words presented, in order to encourage stimulus encoding. One hour later, during the retrieval phase, subjects had to recognize the words and faces. We used an event-related design to identify hippocampal activations. There was no significant difference between patients with left temporal lobe epilepsy, patients with right temporal lobe epilepsy and patients with extratemporal lobe epilepsy on the verbal and visual learning tasks. For words, patients demonstrated significantly more bilateral hippocampal activation for the retrieval task than the encoding task, and when the tasks were associated than during encoding alone. A significant difference was seen between face encoding alone and face retrieval alone. This study demonstrates the essential contribution of the retrieval task during an fMRI memory task, but the number of patients with hippocampal activations was greater when the two tasks were taken into account. Copyright © 2018. Published by Elsevier Masson SAS.

  18. Static and dynamic posture control in postlingual cochlear implanted patients: effects of dual-tasking, visual and auditory inputs suppression.

    Science.gov (United States)

    Bernard-Demanze, Laurence; Léonard, Jacques; Dumitrescu, Michel; Meller, Renaud; Magnan, Jacques; Lacour, Michel

    2013-01-01

    Posture control is based on central integration of multisensory inputs, and on internal representation of body orientation in space. This multisensory feedback regulates posture control and continuously updates the internal model of the body's position, which in turn forwards motor commands adapted to the environmental context and constraints. The peripheral localization of the vestibular system, close to the cochlea, makes vestibular damage possible following cochlear implant (CI) surgery. Impaired vestibular function in CI patients, if any, may have a strong impact on posture stability. The simple postural task of quiet standing is generally paired with cognitive activity in most daily-life conditions, therefore leading to competition for attentional resources in dual-tasking, and an increased risk of falling, particularly in patients with impaired vestibular function. This study was aimed at evaluating the effects of postlingual cochlear implantation on posture control in adult deaf patients. Possible impairment of vestibular function was assessed by comparing the postural performance of patients to that of age-matched healthy subjects during a simple postural task performed in static (stable platform) and dynamic (platform in translation) conditions, and during dual-tasking with a visual or auditory memory task. Postural tests were done in eyes open (EO) and eyes closed (EC) conditions, with the CI activated (ON) or not (OFF). Results showed that the postural performance of the CI patients strongly differed from the controls, mainly in the EC condition. The CI patients showed significantly reduced limits of stability and increased postural instability in static conditions. In dynamic conditions, they spent considerably more energy to maintain equilibrium, and their head was stabilized neither in space nor on the trunk: they behaved dynamically without vision like an inverted pendulum, while the controls showed a whole-body rigidification strategy. Hearing (prosthesis on) as well

  19. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    Science.gov (United States)

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
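    The model description above lends itself to a small simulation. The sketch below is a minimal, hedged reading of a Poisson counter model: during an exposure of duration t, evidence that stimulus i belongs to category j accrues as an independent Poisson count with rate v(i, j), and the reported category is the one with the most counts. The rate matrix, exposure duration, and tie-breaking rule are illustrative assumptions, not the authors' fitted values.

```python
# Sketch of a Poisson counter model for identification of a briefly presented
# stimulus i among mutually confusable categories. During an exposure of duration t,
# tentative categorizations of i as category j arrive at constant rate v[i][j];
# the response is the category with the highest count (ties broken at random).
# All rate values and the exposure duration are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# v[i, j]: rate (categorizations per second) at which stimulus i is taken as category j
v = np.array([[8.0, 2.0, 1.0],
              [2.0, 8.0, 2.0],
              [1.0, 2.0, 8.0]])

def simulate_trial(stimulus: int, t: float) -> int:
    counts = rng.poisson(v[stimulus] * t)            # independent Poisson counters
    winners = np.flatnonzero(counts == counts.max())
    return int(rng.choice(winners))                  # guess among tied categories

t = 0.05  # 50-ms exposure
accuracy = np.mean([simulate_trial(0, t) == 0 for _ in range(10_000)])
print(f"P(correct | stimulus 0, {t*1000:.0f} ms) = {accuracy:.2f}")
```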

  20. Visual mismatch negativity indicates automatic, task-independent detection of artistic image composition in abstract artworks.

    Science.gov (United States)

    Menzel, Claudia; Kovács, Gyula; Amado, Catarina; Hayn-Leichsenring, Gregor U; Redies, Christoph

    2018-05-06

    In complex abstract art, image composition (i.e., the artist's deliberate arrangement of pictorial elements) is an important aesthetic feature. We investigated whether the human brain detects image composition in abstract artworks automatically (i.e., independently of the experimental task). To this aim, we studied whether a group of 20 original artworks elicited a visual mismatch negativity when contrasted with a group of 20 images that were composed of the same pictorial elements as the originals, but in shuffled arrangements, which destroy artistic composition. We used a passive oddball paradigm with parallel electroencephalogram recordings to investigate the detection of image type-specific properties. We observed significant deviant-standard differences for the shuffled and original images, respectively. Furthermore, for both types of images, differences in amplitudes correlated with the behavioral ratings of the images. In conclusion, we show that the human brain can detect composition-related image properties in visual artworks in an automatic fashion. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Selective attention to task-irrelevant emotional distractors is unaffected by the perceptual load associated with a foreground task.

    Science.gov (United States)

    Hindi Attar, Catherine; Müller, Matthias M

    2012-01-01

    A number of studies have shown that emotionally arousing stimuli are preferentially processed in the human brain. Whether or not this preference persists under increased perceptual load associated with a task at hand remains an open question. Here we manipulated two possible determinants of the attentional selection process, perceptual load associated with a foreground task and the emotional valence of concurrently presented task-irrelevant distractors. As a direct measure of sustained attentional resource allocation in early visual cortex we used steady-state visual evoked potentials (SSVEPs) elicited by distinct flicker frequencies of task and distractor stimuli. Subjects either performed a detection (low load) or discrimination (high load) task at a centrally presented symbol stream that flickered at 8.6 Hz while task-irrelevant neutral or unpleasant pictures from the International Affective Picture System (IAPS) flickered at a frequency of 12 Hz in the background of the stream. As reflected in target detection rates and SSVEP amplitudes to both task and distractor stimuli, unpleasant relative to neutral background pictures more strongly withdrew processing resources from the foreground task. Importantly, this finding was unaffected by the factor 'load' which turned out to be a weak modulator of attentional processing in human visual cortex.
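    Frequency tagging of this kind is typically analysed by reading the EEG amplitude spectrum at the two driving frequencies (8.6 Hz for the task stream, 12 Hz for the background pictures). The sketch below shows the basic computation on a synthetic signal; the sampling rate, epoch length and signal itself are assumptions for illustration and do not reflect the study's recording or analysis pipeline.

```python
# Sketch of frequency-tagged SSVEP analysis: the single-sided amplitude spectrum
# of one EEG channel is evaluated at the driving frequencies of the task stream
# (8.6 Hz) and the background pictures (12 Hz). The synthetic signal, sampling
# rate and epoch length are illustrative assumptions.
import numpy as np

fs = 500.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 5.0, 1 / fs)    # one 5-s epoch -> 0.2-Hz frequency resolution

# Synthetic single-channel EEG: two tagged responses plus noise
eeg = (1.5 * np.sin(2 * np.pi * 8.6 * t)
       + 0.8 * np.sin(2 * np.pi * 12.0 * t)
       + np.random.default_rng(1).normal(0, 1.0, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2      # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for f_tag in (8.6, 12.0):
    amp = spectrum[np.argmin(np.abs(freqs - f_tag))]  # amplitude at the nearest FFT bin
    print(f"SSVEP amplitude at {f_tag:.1f} Hz: {amp:.2f} (a.u.)")
```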

  2. Selective attention to task-irrelevant emotional distractors is unaffected by the perceptual load associated with a foreground task.

    Directory of Open Access Journals (Sweden)

    Catherine Hindi Attar

    Full Text Available A number of studies have shown that emotionally arousing stimuli are preferentially processed in the human brain. Whether or not this preference persists under increased perceptual load associated with a task at hand remains an open question. Here we manipulated two possible determinants of the attentional selection process, perceptual load associated with a foreground task and the emotional valence of concurrently presented task-irrelevant distractors. As a direct measure of sustained attentional resource allocation in early visual cortex we used steady-state visual evoked potentials (SSVEPs) elicited by distinct flicker frequencies of task and distractor stimuli. Subjects either performed a detection (low load) or discrimination (high load) task at a centrally presented symbol stream that flickered at 8.6 Hz while task-irrelevant neutral or unpleasant pictures from the International Affective Picture System (IAPS) flickered at a frequency of 12 Hz in the background of the stream. As reflected in target detection rates and SSVEP amplitudes to both task and distractor stimuli, unpleasant relative to neutral background pictures more strongly withdrew processing resources from the foreground task. Importantly, this finding was unaffected by the factor 'load' which turned out to be a weak modulator of attentional processing in human visual cortex.

  3. Selective visual attention and motivation: the consequences of value learning in an attentional blink task.

    Science.gov (United States)

    Raymond, Jane E; O'Brien, Jennifer L

    2009-08-01

    Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.

  4. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    Science.gov (United States)

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  5. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.

    Science.gov (United States)

    Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A

    2017-03-01

    The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
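    A common way to obtain such parameters is to fit mean accuracy as a function of stimulus duration with an exponential accrual function. The sketch below fits one plausible three-parameter form (processing rate v, visual threshold t0, and a motor/guessing baseline b); the exact parameterisation used in the paper may differ, and the example data are invented.

```python
# Sketch of fitting a three-parameter TVA-style psychometric function to mean
# accuracy as a function of stimulus duration (SD): accuracy rises exponentially
# above a visual threshold t0 at processing rate v, from a motor/guessing
# baseline b. The parameterisation and the example data are assumptions for
# illustration; the paper's exact model may differ in detail.
import numpy as np
from scipy.optimize import curve_fit

def tva_accuracy(sd, v, t0, b):
    """P(correct) for stimulus duration sd (s), rate v (1/s), threshold t0 (s), baseline b."""
    effective = np.clip(sd - t0, 0, None)
    return b + (1 - b) * (1 - np.exp(-v * effective))

# Example data: stimulus durations (s) and mean proportion correct per duration
sd = np.array([0.025, 0.05, 0.10, 0.20, 0.40, 0.80])
acc = np.array([0.22, 0.35, 0.55, 0.75, 0.88, 0.93])

params, _ = curve_fit(tva_accuracy, sd, acc,
                      p0=[10.0, 0.02, 0.2],
                      bounds=([0, 0, 0], [100, 0.1, 1]))
v, t0, b = params
print(f"processing speed v = {v:.1f} /s, threshold t0 = {t0*1000:.0f} ms, baseline = {b:.2f}")
```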

  6. Visual short-term memory guides infants' visual attention.

    Science.gov (United States)

    Mitsven, Samantha G; Cantrell, Lisa M; Luck, Steven J; Oakes, Lisa M

    2018-08-01

    Adults' visual attention is guided by the contents of visual short-term memory (VSTM). Here we asked whether 10-month-old infants' (N = 41) visual attention is also guided by the information stored in VSTM. In two experiments, we modified the one-shot change detection task (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013) to create a simplified cued visual search task to ask how information stored in VSTM influences where infants look. A single sample item (e.g., a colored circle) was presented at fixation for 500 ms, followed by a brief (300 ms) retention interval and then a test array consisting of two items, one on each side of fixation. One item in the test array matched the sample stimulus and the other did not. Infants were more likely to look at the non-matching item than at the matching item, demonstrating that the information stored rapidly in VSTM guided subsequent looking behavior. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Visual working memory capacity and proactive interference.

    Science.gov (United States)

    Hartshorne, Joshua K

    2008-07-23

    Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals.

  8. Driving context influences drivers' decision to engage in visual-manual phone tasks: Evidence from a naturalistic driving study.

    Science.gov (United States)

    Tivesten, Emma; Dozza, Marco

    2015-06-01

    Visual-manual (VM) phone tasks (i.e., texting, dialing, reading) are associated with an increased crash/near-crash risk. This study investigated how the driving context influences drivers' decisions to engage in VM phone tasks in naturalistic driving. Video-recordings of 1,432 car trips were viewed to identify VM phone tasks and passenger presence. Video, vehicle signals, and map data were used to classify driving context (i.e., curvature, other vehicles) before and during the VM phone tasks (N=374). Vehicle signals (i.e., speed, yaw rate, forward radar) were available for all driving. VM phone tasks were more likely to be initiated while standing still, and less likely while driving at high speeds, or when a passenger was present. Lead vehicle presence did not influence how likely it was that a VM phone task was initiated, but the drivers adjusted their task timing to situations when the lead vehicle was increasing speed, resulting in increasing time headway. The drivers adjusted task timing until after making sharp turns and lane change maneuvers. In contrast to previous driving simulator studies, there was no evidence of drivers reducing speed as a consequence of VM phone task engagement. The results show that experienced drivers use information about current and upcoming driving context to decide when to engage in VM phone tasks. However, drivers may fail to sufficiently increase safety margins to allow time to respond to possible unpredictable events (e.g., lead vehicle braking). Advanced driver assistance systems should facilitate and possibly boost drivers' self-regulating behavior. For instance, they might recognize when appropriate adaptive behavior is missing and advise or alert accordingly. The results from this study could also inspire training programs for novice drivers, or locally classify roads in terms of the risk associated with secondary task engagement while driving. Copyright © 2015. Published by Elsevier Ltd.

  9. Preschoolers Benefit from Visually Salient Speech Cues

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…

  10. Using task effort and pupil size to track covert shifts of visual attention independently of a pupillary light reflex.

    Science.gov (United States)

    Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie

    2018-03-07

    We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.

  11. Strength of figure-ground activity in monkey primary visual cortex predicts saccadic reaction time in a delayed detection task.

    Science.gov (United States)

    Supèr, Hans; Lamme, Victor A F

    2007-06-01

    When and where are decisions made? In the visual system a saccade, which is a fast shift of gaze toward a target in the visual scene, is the behavioral outcome of a decision. Current neurophysiological data and reaction time models show that saccadic reaction times are determined by a build-up of activity in motor-related structures, such as the frontal eye fields. These structures depend on the sensory evidence of the stimulus. Here we use a delayed figure-ground detection task to show that late modulated activity in the visual cortex (V1) predicts saccadic reaction time. This predictive activity is part of the process of figure-ground segregation and is specific for the saccade target location. These observations indicate that sensory signals are directly involved in the decision of when and where to look.

  12. Electroencephalographic (eeg coherence between visual and motor areas of the left and the right brain hemisphere while performing visuomotor task with the right and the left hand

    Directory of Open Access Journals (Sweden)

    Simon Brežan

    2007-09-01

    Full Text Available Background: Unilateral limb movements are based on the activation of the contralateral primary motor cortex and the bilateral activation of premotor cortices. Performance of a visuomotor task requires visuomotor integration between motor and visual cortical areas. The functional integration (»binding«) of different brain areas is probably mediated by synchronous neuronal oscillatory activity, which can be determined by electroencephalographic (EEG) coherence analysis. We introduced a new method of coherence analysis and compared coherence and power spectra in the left and right hemisphere for the right vs. left hand visuomotor task, hypothesizing that the increase in coherence and decrease in power spectra while performing the task would be greater in the contralateral hemisphere. Methods: We analyzed 6 healthy subjects and recorded their electroencephalogram during a visuomotor task with the right or the left hand. For data analysis, a special Matlab computer programme was designed. The results were statistically analysed by a two-way analysis of variance, one-way analysis of variance and post-hoc t-tests with Bonferroni correction. Results: We demonstrated a significant increase in coherence (p < 0.05) for the visuomotor task compared to control tasks in the alpha (8–13 Hz) and beta 1 (13–20 Hz) frequency bands between visual and motor electrodes. There were no significant differences in coherence or power spectra depending on the hand used. The changes in coherence and power spectra between both hemispheres were symmetrical. Conclusions: In previous studies, a specific increase of coherence and decrease of power spectra for the visuomotor task was found, but we found no conclusive asymmetries when performing the task with the right vs. left hand. This could be explained in a way that increases in coherence and decreases of power spectra reflect symmetrical activation and cooperation between more complex visual and motor brain areas.
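    EEG coherence of this kind is usually computed as magnitude-squared coherence between channel pairs and then averaged within frequency bands. The sketch below illustrates the computation for one visual-motor channel pair in the alpha and beta-1 bands on synthetic data; the sampling rate, segment length and signals are assumptions and do not reproduce the authors' Matlab analysis.

```python
# Sketch of magnitude-squared coherence between a visual (occipital) and a motor
# (central) EEG channel, averaged within the alpha (8-13 Hz) and beta-1 (13-20 Hz)
# bands. The synthetic signals and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 256.0
rng = np.random.default_rng(2)
shared = rng.normal(size=int(fs * 60))                 # common oscillatory drive (60 s)
visual = shared + rng.normal(scale=1.0, size=shared.size)
motor = shared + rng.normal(scale=1.0, size=shared.size)

freqs, coh = coherence(visual, motor, fs=fs, nperseg=512)

for name, lo, hi in (("alpha", 8, 13), ("beta-1", 13, 20)):
    band = (freqs >= lo) & (freqs < hi)
    print(f"mean {name} coherence (visual-motor): {coh[band].mean():.2f}")
```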

  13. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra

    2012-01-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates

  14. Binocular glaucomatous visual field loss and its impact on visual exploration--a supermarket study.

    Directory of Open Access Journals (Sweden)

    Katrin Sippel

    Full Text Available Advanced glaucomatous visual field loss may critically interfere with quality of life. The purpose of this study was to (i) assess the impact of binocular glaucomatous visual field loss on a supermarket search task as an example of everyday living activities, (ii) to identify factors influencing the performance, and (iii) to investigate the related compensatory mechanisms. Ten patients with binocular glaucoma (GP), and ten healthy-sighted control subjects (GC) were asked to collect twenty different products chosen randomly in two supermarket racks as quickly as possible. The task performance was rated as "passed" or "failed" with regard to the time per correctly collected item. Based on the performance of control subjects, the threshold value for failing the task was defined as μ+3σ (in seconds per correctly collected item). Eye movements were recorded by means of a mobile eye tracker. Eight out of ten patients with glaucoma and all control subjects passed the task. Patients who failed the task needed significantly longer time (111.47 s ±12.12 s) to complete the task than patients who passed (64.45 s ±13.36 s; t-test, p < 0.001). Furthermore, patients who passed the task showed a significantly higher number of glances towards the visual field defect (VFD) area than patients who failed (t-test, p < 0.05). According to these results, glaucoma patients with defects in the binocular visual field display on average longer search times in a naturalistic supermarket task. However, a considerable number of patients, who compensate by frequent glancing towards the VFD, showed successful task performance. Therefore, systematic exploration of the VFD area seems to be a "time-effective" compensatory mechanism during the present supermarket task.

  15. Benefits of interhemispheric integration on the Japanese Kana script-matching tasks.

    Science.gov (United States)

    Yoshizaki, K; Tsuji, Y

    2000-02-01

    We tested Banich's hypothesis that the benefits of bihemispheric processing were enhanced as task complexity increased, when some procedural shortcomings in the previous studies were overcome by using Japanese Kana script-matching tasks. In Exp. 1, the 20 right-handed subjects were given the Physical-Identity task (Katakana-Katakana scripts matching) and the Name-Identity task (Katakana-Hiragana scripts matching). On both tasks, a pair of Kana scripts was tachistoscopically presented in the left, right, and bilateral visual fields. Distractor stimuli were also presented with target Kana scripts on both tasks to equate the processing load between the hemispheres. Analysis showed that, while a bilateral visual-field advantage was found on the name-identity task, a unilateral visual-field advantage was found on the physical-identity task, suggesting that, as the computational complexity of the encoding stage was enhanced, the benefits of bilateral hemispheric processing increased. In Exp. 2, the 16 right-handed subjects were given the same physical-identity task as in Exp. 1, except Hiragana scripts were used as distractors instead of digits to enhance task difficulty. Analysis showed no differences in performance between the unilateral and bilateral visual fields. Taking into account these results of physical-identity tasks for both Exps. 1 and 2, enhancing task demand in the stage of ignoring distractors made the unilateral visual-field advantage obtained in Exp. 1 disappear in Exp. 2. These results supported Banich's hypothesis.

  16. Similarity relations in visual search predict rapid visual categorization

    Science.gov (United States)

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  17. The modality effect of ego depletion: Auditory task modality reduces ego depletion.

    Science.gov (United States)

    Li, Qiong; Wang, Zhenhong

    2016-08-01

    An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect was also robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, one group's participants completed a visual attention regulation task and the other group's participants completed an auditory attention regulation task, and then all participants again completed a handgrip task. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, which indicated that there was high ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  18. Visual perspective taking impairment in children with autistic spectrum disorder.

    Science.gov (United States)

    Hamilton, Antonia F de C; Brindley, Rachel; Frith, Uta

    2009-10-01

    Evidence from typical development and neuroimaging studies suggests that level 2 visual perspective taking - the knowledge that different people may see the same thing differently at the same time - is a mentalising task. Thus, we would expect children with autism, who fail typical mentalising tasks like false belief, to perform poorly on level 2 visual perspective taking as well. However, prior data on this issue are inconclusive. We re-examined this question, testing a group of 23 young autistic children, aged around 8 years with a verbal mental age of around 4 years, and three groups of typical children (n=60) ranging in age from 4 to 8 years on a level 2 visual perspective task and a closely matched mental rotation task. The results demonstrate that autistic children have difficulty with visual perspective taking compared to a task requiring mental rotation, relative to typical children. Furthermore, performance on the level 2 visual perspective taking task correlated with theory of mind performance. These findings resolve discrepancies in previous studies of visual perspective taking in autism, and demonstrate that level 2 visual perspective taking is a mentalising task.

  19. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    Directory of Open Access Journals (Sweden)

    Hamza Alzarok

    2017-01-01

    Full Text Available The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet the two requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the six-degree-of-freedom (DOF) robot end-effector that performs a pick-and-place task. Two cooperating eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. The paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the

  20. Visual selective attention in amnestic mild cognitive impairment.

    Science.gov (United States)

    McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E

    2014-11-01

    Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Visual working memory capacity and proactive interference.

    Directory of Open Access Journals (Sweden)

    Joshua K Hartshorne

    Full Text Available BACKGROUND: Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. METHODOLOGY/PRINCIPAL FINDINGS: Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. CONCLUSIONS/SIGNIFICANCE: This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals.

  2. Endogenous visuospatial attention increases visual awareness independent of visual discrimination sensitivity.

    Science.gov (United States)

    Vernet, Marine; Japee, Shruti; Lokey, Savannah; Ahmed, Sara; Zachariou, Valentinos; Ungerleider, Leslie G

    2017-08-12

    Visuospatial attention often improves task performance by increasing signal gain at attended locations and decreasing noise at unattended locations. Attention is also believed to be the mechanism that allows information to enter awareness. In this experiment, we assessed whether orienting endogenous visuospatial attention with cues differentially affects visual discrimination sensitivity (an objective task performance) and visual awareness (the subjective feeling of perceiving) during the same discrimination task. Gabor patch targets were presented laterally, either at low contrast (contrast stimuli) or at high contrast embedded in noise (noise stimuli). Participants reported their orientation either in a 3-alternative choice task (clockwise, counterclockwise, unknown) that allowed for both objective and subjective reports, or in a 2-alternative choice task (clockwise, counterclockwise) that provided a control for objective reports. Signal detection theory models were fit to the experimental data: estimated perceptual sensitivity reflected objective performance; decision criteria, or subjective biases, were a proxy for visual awareness. Attention increased sensitivity (i.e., improved objective performance) for the contrast, but not for the noise stimuli. Indeed, with the latter, attention did not further enhance the already high target signal or reduce the already low uncertainty on its position. Interestingly, for both contrast and noise stimuli, attention resulted in more liberal criteria, i.e., awareness increased. The noise condition is thus an experimental configuration where people think they see the targets they attend to better, even if they do not. This could be explained by an internal representation of their attentional state, which influences awareness independent of objective visual signals. Copyright © 2017. Published by Elsevier Ltd.

  3. Alzheimer disease: functional abnormalities in the dorsal visual pathway.

    LENUS (Irish Health Repository)

    Bokde, Arun L W

    2012-02-01

    PURPOSE: To evaluate whether patients with Alzheimer disease (AD) have altered activation compared with age-matched healthy control (HC) subjects during a task that typically recruits the dorsal visual pathway. MATERIALS AND METHODS: The study was performed in accordance with the Declaration of Helsinki, with institutional ethics committee approval, and all subjects provided written informed consent. Two tasks were performed to investigate neural function: face matching and location matching. Twelve patients with mild AD and 14 age-matched HC subjects were included. Brain activation was measured by using functional magnetic resonance imaging. Group statistical analyses were based on a mixed-effects model corrected for multiple comparisons. RESULTS: Task performance was not statistically different between the two groups, and within groups there were no differences in task performance. In the HC group, the visual perception tasks selectively activated the visual pathways. Conversely in the AD group, there was no selective activation during performance of these same tasks. Along the dorsal visual pathway, the AD group recruited additional regions, primarily in the parietal and frontal lobes, for the location-matching task. There were no differences in activation between groups during the face-matching task. CONCLUSION: The increased activation in the AD group may represent a compensatory mechanism for decreased processing effectiveness in early visual areas of patients with AD. The findings support the idea that the dorsal visual pathway is more susceptible to putative AD-related neuropathologic changes than is the ventral visual pathway.

  4. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    Science.gov (United States)

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Measuring perceived ceiling height in a visual comparison task.

    Science.gov (United States)

    von Castell, Christoph; Hecht, Heiko; Oberfeld, Daniel

    2017-03-01

    When judging interior space, a dark ceiling is judged to be lower than a light ceiling. The method of metric judgments (e.g., on a centimetre scale) that has typically been used in such tasks may reflect a genuine perceptual effect or it may reflect a cognitively mediated impression. We employed a height-matching method in which perceived ceiling height had to be matched with an adjustable pillar, thus obtaining psychometric functions that allowed for an estimation of the point of subjective equality (PSE) and the difference limen (DL). The height-matching method developed in this paper allows for a direct visual match and does not require metric judgment. It has the added advantage of providing superior precision. Experiment 1 used ceiling heights between 2.90 m and 3.00 m. The PSE proved sensitive to slight changes in perceived ceiling height. The DL was about 3% of the physical ceiling height. Experiment 2 found similar results for lower (2.30 m to 2.50 m) and higher (3.30 m to 3.50 m) ceilings. In Experiment 3, we additionally varied ceiling lightness (light grey vs. dark grey). The height matches showed that the light ceiling appeared significantly higher than the darker ceiling. We therefore attribute the influence of ceiling lightness on perceived ceiling height to a direct perceptual rather than a cognitive effect.
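    PSE and DL estimates of this kind are usually obtained by fitting a psychometric function to the matching responses. The sketch below fits a cumulative Gaussian to invented height-matching data, taking its mean as the PSE and deriving the DL from its spread; the function form and data are assumptions for illustration, not the authors' procedure.

```python
# Sketch of estimating the point of subjective equality (PSE) and difference
# limen (DL) from a height-matching task: the proportion of "pillar looks taller
# than the ceiling" responses is fit with a cumulative Gaussian whose mean is the
# PSE and whose spread gives the DL. The example data are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(pillar_height, pse, sigma):
    return norm.cdf(pillar_height, loc=pse, scale=sigma)

# Adjustable pillar heights (m) and proportion of "taller" judgements at each
heights = np.array([2.80, 2.85, 2.90, 2.95, 3.00, 3.05, 3.10])
p_taller = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.98])

(pse, sigma), _ = curve_fit(psychometric, heights, p_taller, p0=[2.95, 0.05])
dl = sigma * norm.ppf(0.75)     # half the distance between the 25% and 75% points
print(f"PSE = {pse:.3f} m, DL = {dl*100:.1f} cm ({dl/pse*100:.1f}% of matched height)")
```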

  6. Effects of task and image properties on visual-attention deployment in image-quality assessment

    Science.gov (United States)

    Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid

    2015-03-01

    It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring the image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon 4 years of research work spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior of different kinds of stimuli and under different experimental settings. This work performs a cross-analysis on the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking the viewers to score the IQ significantly changes their viewing behavior. Also muting the color saturation seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image viewing behavior under different conditions. They also have important implications on work that collects subjective image-quality scores from human observers.

  7. Functional magnetic resonance imaging of visual object construction and shape discrimination : relations among task, hemispheric lateralization, and gender.

    Science.gov (United States)

    Georgopoulos, A P; Whang, K; Georgopoulos, M A; Tagaris, G A; Amirikian, B; Richter, W; Kim, S G; Uğurbil, K

    2001-01-01

    We studied the brain activation patterns in two visual image processing tasks requiring judgements on object construction (FIT task) or object sameness (SAME task). Eight right-handed healthy human subjects (four women and four men) performed the two tasks in a randomized block design while 5-mm, multislice functional images of the whole brain were acquired with a 4-tesla system using blood oxygenation level-dependent (BOLD) contrast. Pairs of objects were picked randomly from a set of 25 oriented fragments of a square and presented to the subjects approximately every 5 sec. In the FIT task, subjects had to indicate, by pushing one of two buttons, whether the two fragments could match to form a perfect square, whereas in the SAME task they had to decide whether they were the same or not. In a control task, preceding and following each of the two tasks above, a single square was presented at the same rate and subjects pushed either of the two keys at random. Functional activation maps were constructed based on a combination of conservative criteria. The areas with activated pixels were identified using Talairach coordinates and anatomical landmarks, and the number of activated pixels was determined for each area. Altogether, 379 pixels were activated. The counts of activated pixels did not differ significantly between the two tasks or between the two genders. However, there were significantly more activated pixels in the left (n = 218) than the right side of the brain (n = 161). Of the 379 activated pixels, 371 were located in the cerebral cortex. The Talairach coordinates of these pixels were analyzed with respect to their overall distribution in the two tasks. These distributions differed significantly between the two tasks. With respect to individual dimensions, the two tasks differed significantly in the anterior-posterior and superior-inferior distributions but not in the left-right (including mediolateral, within the left or right side) distribution. Specifically

  8. Intrinsic motivation and attentional capture from gamelike features in a visual search task.

    Science.gov (United States)

    Miranda, Andrew T; Palmer, Evan M

    2014-03-01

    In psychology research studies, the goals of the experimenter and the goals of the participants often do not align. Researchers are interested in having participants who take the experimental task seriously, whereas participants are interested in earning their incentive (e.g., money or course credit) as quickly as possible. Creating experimental methods that are pleasant for participants and that reward them for effortful and accurate data generation, while not compromising the scientific integrity of the experiment, would benefit both experimenters and participants alike. Here, we explored a gamelike system of points and sound effects that rewarded participants for fast and accurate responses. We measured participant engagement at both cognitive and perceptual levels and found that the point system (which invoked subtle, anonymous social competition between participants) led to positive intrinsic motivation, while the sound effects (which were pleasant and arousing) led to attentional capture for rewarded colors. In a visual search task, points were awarded after each trial for fast and accurate responses, accompanied by short, pleasant sound effects. We adapted a paradigm from Anderson, Laurent, and Yantis (Proceedings of the National Academy of Sciences 108(25):10367-10371, 2011b), in which participants completed a training phase during which red and green targets were probabilistically associated with reward (a point bonus multiplier). During a test phase, no points or sounds were delivered, color was irrelevant to the task, and previously rewarded targets were sometimes presented as distractors. Significantly longer response times on trials in which previously rewarded colors were present demonstrated attentional capture, and positive responses to a five-question intrinsic-motivation scale demonstrated participant engagement.

  9. Autistic fluid intelligence: Increased reliance on visual functional connectivity with diminished modulation of coupling by task difficulty

    Science.gov (United States)

    Simard, Isabelle; Luck, David; Mottron, Laurent; Zeffiro, Thomas A.; Soulières, Isabelle

    2015-01-01

    Different test types lead to different intelligence estimates in autism, as illustrated by the fact that autistic individuals obtain higher scores on the Raven's Progressive Matrices (RSPM) test than they do on the Wechsler IQ, in contrast to relatively similar performance on both tests in non-autistic individuals. However, the cerebral processes underlying these differences are not well understood. This study investigated whether activity in the fluid “reasoning” network, which includes frontal, parietal, temporal and occipital regions, is differently modulated by task complexity in autistic and non-autistic individuals during the RSPM. For this purpose, we used fMRI to study autistic and non-autistic participants solving the 60 RSPM problems, focussing on regions and networks involved in reasoning complexity. As complexity increased, activity in the left superior occipital gyrus and the left middle occipital gyrus increased for autistic participants, whereas non-autistic participants showed increased activity in the left middle frontal gyrus and bilateral precuneus. Using psychophysiological interaction analyses (PPI), we then examined in which regions functional connectivity increased as a function of reasoning complexity. PPI analyses revealed greater connectivity in autistic, compared to non-autistic participants, between the left inferior occipital gyrus and areas in the left superior frontal gyrus, right superior parietal lobe, right middle occipital gyrus and right inferior temporal gyrus. We also observed generally less modulation of the reasoning network as complexity increased in autistic participants. These results suggest that autistic individuals, when confronted with increasing task complexity, rely mainly on visuospatial processes when solving more complex matrices. In addition to the now well-established enhanced activity observed in visual areas in a range of tasks, these results suggest that the enhanced reliance on visual perception has a

  10. Autistic fluid intelligence: Increased reliance on visual functional connectivity with diminished modulation of coupling by task difficulty

    Directory of Open Access Journals (Sweden)

    Isabelle Simard

    2015-01-01

    Full Text Available Different test types lead to different intelligence estimates in autism, as illustrated by the fact that autistic individuals obtain higher scores on the Raven's Progressive Matrices (RSPM) test than they do on the Wechsler IQ, in contrast to relatively similar performance on both tests in non-autistic individuals. However, the cerebral processes underlying these differences are not well understood. This study investigated whether activity in the fluid “reasoning” network, which includes frontal, parietal, temporal and occipital regions, is differently modulated by task complexity in autistic and non-autistic individuals during the RSPM. For this purpose, we used fMRI to study autistic and non-autistic participants solving the 60 RSPM problems, focussing on regions and networks involved in reasoning complexity. As complexity increased, activity in the left superior occipital gyrus and the left middle occipital gyrus increased for autistic participants, whereas non-autistic participants showed increased activity in the left middle frontal gyrus and bilateral precuneus. Using psychophysiological interaction analyses (PPI), we then examined in which regions functional connectivity increased as a function of reasoning complexity. PPI analyses revealed greater connectivity in autistic, compared to non-autistic participants, between the left inferior occipital gyrus and areas in the left superior frontal gyrus, right superior parietal lobe, right middle occipital gyrus and right inferior temporal gyrus. We also observed generally less modulation of the reasoning network as complexity increased in autistic participants. These results suggest that autistic individuals, when confronted with increasing task complexity, rely mainly on visuospatial processes when solving more complex matrices. In addition to the now well-established enhanced activity observed in visual areas in a range of tasks, these results suggest that the enhanced reliance on visual

  11. Functional Activation during the Rapid Visual Information Processing Task in a Middle Aged Cohort: An fMRI Study.

    Science.gov (United States)

    Neale, Chris; Johnston, Patrick; Hughes, Matthew; Scholey, Andrew

    2015-01-01

    The Rapid Visual Information Processing (RVIP) task, a serial discrimination task in which performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has been undertaken using block analyses, reflecting the sustained processing involved in the task, but not necessarily the transient processes associated with individual trial performance. Furthermore, this research has been limited to young cohorts. This study assessed the behavioural and functional magnetic resonance imaging (fMRI) outcomes of the RVIP task using both block and event-related analyses in a healthy middle aged cohort (mean age = 53.56 years, n = 16). The results show that the version of the RVIP used here is sensitive to changes in attentional demand, with participants achieving a 43% hit rate in the experimental task compared with 96% accuracy in the control task. As shown by previous research, the block analysis revealed an increase in activation in a network of frontal, parietal, occipital and cerebellar regions. The event-related analysis showed a similar network of activation, seemingly omitting regions involved in the processing of the task (as shown in the block analysis), such as occipital areas and the thalamus, providing an indication of a network of regions involved in correct trial performance. Frontal (superior and inferior frontal gyri), parietal (precuneus, inferior parietal lobe) and cerebellar regions were shown to be active in both the block and event-related analyses, suggesting their importance in sustained attention/vigilance. These networks and the differences between them are discussed in detail, as well as implications for future research in middle aged cohorts.

  12. Functional Activation during the Rapid Visual Information Processing Task in a Middle Aged Cohort: An fMRI Study.

    Directory of Open Access Journals (Sweden)

    Chris Neale

    Full Text Available The Rapid Visual Information Processing (RVIP) task, a serial discrimination task in which performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has been undertaken using block analyses, reflecting the sustained processing involved in the task, but not necessarily the transient processes associated with individual trial performance. Furthermore, this research has been limited to young cohorts. This study assessed the behavioural and functional magnetic resonance imaging (fMRI) outcomes of the RVIP task using both block and event-related analyses in a healthy middle aged cohort (mean age = 53.56 years, n = 16). The results show that the version of the RVIP used here is sensitive to changes in attentional demand, with participants achieving a 43% hit rate in the experimental task compared with 96% accuracy in the control task. As shown by previous research, the block analysis revealed an increase in activation in a network of frontal, parietal, occipital and cerebellar regions. The event-related analysis showed a similar network of activation, seemingly omitting regions involved in the processing of the task (as shown in the block analysis), such as occipital areas and the thalamus, providing an indication of a network of regions involved in correct trial performance. Frontal (superior and inferior frontal gyri), parietal (precuneus, inferior parietal lobe) and cerebellar regions were shown to be active in both the block and event-related analyses, suggesting their importance in sustained attention/vigilance. These networks and the differences between them are discussed in detail, as well as implications for future research in middle aged cohorts.
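
    For readers unfamiliar with how hit rates in an RVIP-style task are scored, the hedged Python sketch below illustrates one common variant: targets are runs of three consecutive odd or three consecutive even digits, and key presses are scored against those target positions. The exact parameters of the version used in this study are not reported, so the digit stream, response indices, and scoring rule here are purely illustrative.

```python
# Hedged sketch of RVIP-style scoring; not the task code used in the study.
def target_positions(digits):
    """Indices at which a run of three same-parity digits is completed."""
    hits = []
    for i in range(2, len(digits)):
        a, b, c = digits[i - 2], digits[i - 1], digits[i]
        if a % 2 == b % 2 == c % 2:
            hits.append(i)
    return hits

stream = [3, 7, 1, 4, 2, 6, 9, 5, 8, 2, 4]   # hypothetical digit stream
responses = {2, 5, 7}                        # indices where the key was pressed (hypothetical)

targets = set(target_positions(stream))      # {2, 5, 10} for this stream
hit_rate = len(targets & responses) / len(targets)
false_alarms = responses - targets
print(f"hit rate = {hit_rate:.0%}, false alarms = {len(false_alarms)}")
```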

  13. PC-PVT 2.0: An updated platform for psychomotor vigilance task testing, analysis, prediction, and visualization.

    Science.gov (United States)

    Reifman, Jaques; Kumar, Kamal; Khitrov, Maxim Y; Liu, Jianbo; Ramakrishnan, Sridhar

    2018-07-01

    The psychomotor vigilance task (PVT) has been widely used to assess the effects of sleep deprivation on human neurobehavioral performance. To facilitate research in this field, we previously developed the PC-PVT, a freely available software system analogous to the "gold-standard" PVT-192 that, in addition to allowing for simple visual reaction time (RT) tests, also allows for near real-time PVT analysis, prediction, and visualization in a personal computer (PC). Here we present the PC-PVT 2.0 for Windows 10 operating system, which has the capability to couple PVT tests of a study protocol with the study's sleep/wake and caffeine schedules, and make real-time individualized predictions of PVT performance for such schedules. We characterized the accuracy and precision of the software in measuring RT, using 44 distinct combinations of PC hardware system configurations. We found that 15 system configurations measured RTs with an average delay of less than 10 ms, an error comparable to that of the PVT-192. To achieve such small delays, the system configuration should always use a gaming mouse as the means to respond to visual stimuli. We recommend using a discrete graphical processing unit for desktop PCs and an external monitor for laptop PCs. This update integrates a study's sleep/wake and caffeine schedules with the testing software, facilitating testing and outcome visualization, and provides near-real-time individualized PVT predictions for any sleep-loss condition considering caffeine effects. The software, with its enhanced PVT analysis, visualization, and prediction capabilities, can be freely downloaded from https://pcpvt.bhsai.org. Published by Elsevier B.V.
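
    The minimal Python sketch below illustrates only the basic stimulus-response timing logic of a PVT-style trial (variable foreperiod, response latency, lapse flag). It is not the PC-PVT 2.0 implementation, which controls display and mouse timing far more precisely; the delay range and the 500 ms lapse threshold are common conventions rather than details taken from this record.

```python
# Hedged, console-only sketch of a single PVT-style visual reaction-time trial.
import random
import time

def pvt_trial(min_delay=2.0, max_delay=10.0):
    time.sleep(random.uniform(min_delay, max_delay))  # variable foreperiod
    t0 = time.perf_counter()                          # stimulus onset
    input("*** RESPOND (press Enter) *** ")
    return (time.perf_counter() - t0) * 1000.0        # reaction time in ms

if __name__ == "__main__":
    rt = pvt_trial()
    lapse = rt > 500.0   # lapses are commonly defined as RTs above 500 ms
    print(f"RT = {rt:.0f} ms, lapse = {lapse}")
```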

  14. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  15. Sensory modality specificity of neural activity related to memory in visual cortex.

    Science.gov (United States)

    Gibson, J R; Maunsell, J H

    1997-09-01

    Previous studies have shown that when monkeys perform a delayed match-to-sample (DMS) task, some neurons in inferotemporal visual cortex are activated selectively during the delay period when the animal must remember particular visual stimuli. This selective delay activity may be involved in short-term memory. It does not depend on visual stimulation: both auditory and tactile stimuli can trigger selective delay activity in inferotemporal cortex when animals expect to respond to visual stimuli in a DMS task. We have examined the overall modality specificity of delay period activity using a variety of auditory/visual cross-modal and unimodal DMS tasks. The cross-modal DMS tasks involved making specific long-term memory associations between visual and auditory stimuli, whereas the unimodal DMS tasks were standard identity matching tasks. Delay activity existed in auditory/visual cross-modal DMS tasks whether the animal anticipated responding to visual or auditory stimuli. No evidence of selective delay period activation was seen in a purely auditory DMS task. Delay-selective cells were relatively common in one animal, where they constituted up to 53% of neurons tested with a given task. This was only the case for up to 9% of cells in a second animal. In the first animal, a specific long-term memory representation for learned cross-modal associations was observed in delay activity, indicating that this type of representation need not be purely visual. Furthermore, in this same animal, delay activity in one cross-modal task, an auditory-to-visual task, predicted correct and incorrect responses. These results suggest that neurons in inferotemporal cortex contribute to abstract memory representations that can be activated by input from other sensory modalities, but these representations are specific to visual behaviors.

  16. How visual working memory contents influence priming of visual attention.

    Science.gov (United States)

    Carlisle, Nancy B; Kristjánsson, Árni

    2017-04-12

    Recent evidence shows that when the contents of visual working memory overlap with targets and distractors in a pop-out search task, intertrial priming is inhibited (Kristjánsson, Sævarsson, & Driver, Psychonomic Bulletin & Review, 20(3), 514-521, 2013, Experiment 2). This may reflect an interesting interaction between implicit short-term memory, which is thought to underlie intertrial priming, and explicit visual working memory. Evidence from a non-pop-out search task suggests that it may specifically be holding distractors in visual working memory that disrupts intertrial priming (Cunningham & Egeth, Psychological Science, 27(4), 476-485, 2016, Experiment 2). We examined whether the inhibition of priming depends on whether feature values in visual working memory overlap with targets or distractors in the pop-out search, and we found that the inhibition of priming resulted from holding distractors in visual working memory. These results are consistent with separate mechanisms of target and distractor effects in intertrial priming, and support the notion that implicit short-term memory and explicit visual working memory can interact when each provides conflicting attentional signals.

  17. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  18. Domain-specificity of creativity: a study on the relationship between Visual Creativity and Visual Mental Imagery.

    Directory of Open Access Journals (Sweden)

    Massimiliano ePalmiero

    2015-12-01

    Full Text Available Creativity refers to the capability to produce original and valuable ideas and solutions. It involves different processes. In this study, the extent to which visual creativity is related to cognitive processes underlying visual mental imagery was investigated. Fifty college students (25 women) carried out: the Creative Synthesis Task, which measures the ability to produce creative objects belonging to a given category (originality, synthesis and transformation scores of pre-inventive forms, and originality and practicality scores of inventions were computed); an adaptation of Clark's Drawing Ability Test, which measures the ability to produce actual creative artworks (graphic ability, aesthetic and creativity scores of drawings were assessed); and three mental imagery tasks that investigate the three main cognitive processes involved in visual mental imagery: generation, inspection and transformation. Vividness of imagery and verbalizer-visualizer cognitive style were also measured using questionnaires. Correlation analysis revealed that all measures of the creativity tasks positively correlated with the image transformation imagery ability; practicality of inventions negatively correlated with vividness of imagery; originality of inventions positively correlated with the visualization cognitive style. However, regression analysis confirmed the predictive role of the transformation imagery ability only for the originality score of inventions and for the graphic ability and aesthetic scores of artistic drawings; on the other hand, the visualization cognitive style predicted the originality of inventions, whereas the vividness of imagery predicted practicality of inventions. These results are consistent with the notion that visual creativity is domain- and task-specific.

  19. The Wikipedia Image Retrieval Task

    NARCIS (Netherlands)

    T. Tsikrika (Theodora); J. Kludas

    2010-01-01

    The Wikipedia image retrieval task at ImageCLEF provides a testbed for the system-oriented evaluation of visual information retrieval from a collection of Wikipedia images. The aim is to investigate the effectiveness of retrieval approaches that exploit textual and visual evidence in the
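
    System-oriented retrieval evaluation of the kind run at ImageCLEF is typically based on ranked-list metrics such as average precision. The short Python sketch below shows that computation on an invented ranked list; the image identifiers and relevance judgements are hypothetical and are not taken from the task's test collection.

```python
# Hedged sketch of average precision (AP) for one query; the benchmark's
# official scoring tools may differ in details such as tie handling.
def average_precision(ranked_ids, relevant_ids):
    hits, precisions = 0, []
    for k, doc in enumerate(ranked_ids, 1):
        if doc in relevant_ids:
            hits += 1
            precisions.append(hits / k)   # precision at each relevant hit
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

ranked = ["img7", "img2", "img9", "img4", "img1"]   # system output (hypothetical)
relevant = {"img2", "img4", "img8"}                 # ground-truth relevant images
print(f"AP = {average_precision(ranked, relevant):.3f}")
```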

  20. Psychoacoustical Measures in Individuals with Congenital Visual Impairment.

    Science.gov (United States)

    Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh

    2017-12-01

    In individuals with congenital visual impairment, the impaired visual modality is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, and an equal number of normally sighted participants took part. All participants had normal hearing sensitivity and normal middle ear functioning. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 compared with normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Individuals with visual impairment thus showed superior auditory processing and speech perception on the more complex auditory perceptual tasks.

  1. Cogito ergo video: Task-relevant information is involuntarily boosted into awareness.

    Science.gov (United States)

    Gayet, Surya; Brascamp, Jan W; Van der Stigchel, Stefan; Paffen, Chris L E

    2015-01-01

    Only part of the visual information that impinges on our retinae reaches visual awareness. In a series of three experiments, we investigated how the task relevance of incoming visual information affects its access to visual awareness. On each trial, participants were instructed to memorize one of two presented hues, drawn from different color categories (e.g., red and green), for later recall. During the retention interval, participants were presented with a differently colored grating in each eye such as to elicit binocular rivalry. A grating matched either the task-relevant (memorized) color category or the task-irrelevant (nonmemorized) color category. We found that the rivalrous stimulus that matched the task-relevant color category tended to dominate awareness over the rivalrous stimulus that matched the task-irrelevant color category. This effect of task relevance persisted when participants reported the orientation of the rivalrous stimuli, even though in this case color information was completely irrelevant for the task of reporting perceptual dominance during rivalry. When participants memorized the shape of a colored stimulus, however, its color category did not affect predominance of rivalrous stimuli during retention. Taken together, these results indicate that the selection of task-relevant information is under volitional control but that visual input that matches this information is boosted into awareness irrespective of whether this is useful for the observer.

  2. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. In contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  3. Task Selection is Critical for the Demonstration of Reciprocal Patterns of Sex Differences in Hand/Arm Motor Control and Near/Far Visual Processing

    Directory of Open Access Journals (Sweden)

    Geoff Sanders

    2008-04-01

    Full Text Available Women have been reported to perform better with hand rather than arm movements (Sanders and Walsh, 2007) and with visual stimuli in near rather than far space (Sanders, Sinclair and Walsh, 2007). Men performed better with the arm and in far space. These reciprocal patterns of sex differences appear as Muscle*Sex and Space*Sex interactions. We investigated these claims using target cancellation tasks in which task difficulty was manipulated by varying target size or the number of distracters. In Study 1 we did not find the Muscle*Sex or the Space*Sex interaction. We argue that ballistic movement was too simple to reveal the Muscle*Sex interaction. However, a trend for the Space*Sex interaction suggested task difficulty was set too high. Study 2 introduced easier levels of difficulty and the overall Space*Sex interaction narrowly failed to reach significance (p = 0.051). In Study 3 the Space*Sex interaction was significant (p = 0.001). A review of the present, and four previously published, studies indicates that task selection is critical if the Space*Sex interaction and its associated reciprocal within-sex differences are to be demonstrated without the obscuring effects of Space and Difficulty. These sex differences are compatible with predictions from the hunter-gatherer hypothesis. Implications for two-visual-system models are considered.

  4. Visual memory and visual perception: when memory improves visual search.

    Science.gov (United States)

    Riou, Benoit; Lesourd, Mathieu; Brunel, Lionel; Versace, Rémy

    2011-08-01

    This study examined the relationship between memory and perception in order to identify the influence of a memory dimension in perceptual processing. Our aim was to determine whether the variation of typical size between items (i.e., the size in real life) affects visual search. In two experiments, the congruency between typical size difference and perceptual size difference was manipulated in a visual search task. We observed that congruency between the typical and perceptual size differences decreased reaction times in the visual search (Exp. 1), and noncongruency between these two differences increased reaction times in the visual search (Exp. 2). We argue that these results highlight that memory and perception share some resources and reveal the intervention of typical size difference on the computation of the perceptual size difference.

  5. The Impact of Task Demands on Fixation-Related Brain Potentials during Guided Search.

    Directory of Open Access Journals (Sweden)

    Anthony J Ries

    Full Text Available Recording synchronous data from EEG and eye-tracking provides a unique methodological approach for measuring the sensory and cognitive processes of overt visual search. Using this approach we obtained fixation-related potentials (FRPs) during a guided visual search task, specifically focusing on the lambda and P3 components. An outstanding question is whether the lambda and P3 FRP components are influenced by concurrent task demands. We addressed this question by obtaining simultaneous eye-movement and electroencephalographic (EEG) measures during a guided visual search task while parametrically modulating working memory load using an auditory N-back task. Participants performed the guided search task alone, while ignoring binaurally presented digits, or while using the auditory information in a 0, 1, or 2-back task. The results showed increased reaction time and decreased accuracy in both the visual search and N-back tasks as a function of auditory load. Moreover, high auditory task demands increased the P3 but not the lambda latency, while the amplitude of both lambda and P3 was reduced during high auditory task demands. The results show that both early and late stages of visual processing indexed by FRPs are significantly affected by concurrent task demands imposed by auditory working memory.
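
    To make the fixation-related potential (FRP) idea concrete, the hedged Python sketch below epochs a single EEG channel around fixation onsets and averages the epochs. The sampling rate, epoch window, baseline choice, and stand-in data are assumptions made for illustration; the study's actual pipeline (artifact rejection, handling of overlapping fixations) is not reproduced here.

```python
# Hedged sketch of fixation-locked averaging for one EEG channel.
import numpy as np

fs = 500                                        # sampling rate in Hz (assumed)
eeg = np.random.randn(60 * fs)                  # 60 s of single-channel EEG (stand-in data)
fixation_onsets = np.array([2.1, 5.4, 9.8, 14.2, 20.5])  # seconds, from the eye tracker

pre, post = int(0.2 * fs), int(0.6 * fs)        # -200 ms to +600 ms window
epochs = []
for t in fixation_onsets:
    s = int(t * fs)
    epoch = eeg[s - pre:s + post]
    epoch = epoch - epoch[:pre].mean()          # baseline-correct on the pre-fixation interval
    epochs.append(epoch)

frp = np.mean(epochs, axis=0)                   # average fixation-locked waveform
print(frp.shape)                                # (400,) samples = 800 ms at 500 Hz
```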

  6. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as color change in a salient feature, called the "pop-out effect" can increase task solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally...... synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo...

  7. Visual Descriptor Learning for Predicting Grasping Affordances

    DEFF Research Database (Denmark)

    Thomsen, Mikkel Tang

    2016-01-01

    by the task of grasping unknown objects given visual sensor information. The contributions from this thesis stem from three works that all relate to the task of grasping unknown objects but with particular focus on the visual representation part of the problem. First an investigation of a visual feature space...... consisting of surface features was performed. Dimensions in the visual space were varied and the effects were evaluated with the task of grasping unknown object. The evaluation was performed using a novel probabilistic grasp prediction approach based on neighbourhood analysis. The resulting success......-rates for predicting grasps were between 75% and 90% depending on the object class. The investigations also provided insights into the importance of selecting a proper visual feature space when utilising it for predicting affordances. As a consequence of the gained insights, a semi-local surface feature, the Sliced...

  8. What you say matters: exploring visual-verbal interactions in visual working memory.

    Science.gov (United States)

    Mate, Judit; Allen, Richard J; Baqués, Josep

    2012-01-01

    The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

  9. The impact of ageing and gender on visual mental imagery processes: A study of performance on tasks from the Complete Visual Mental Imagery Battery (CVMIB).

    Science.gov (United States)

    Palermo, Liana; Piccardi, Laura; Nori, Raffaella; Giusberti, Fiorella; Guariglia, Cecilia

    2016-09-01

    In this study we aim to evaluate the impact of ageing and gender on different visual mental imagery processes. Two hundred and fifty-one participants (130 women and 121 men; age range = 18-77 years) were given an extensive neuropsychological battery including tasks probing the generation, maintenance, inspection, and transformation of visual mental images (Complete Visual Mental Imagery Battery, CVMIB). Our results show that all mental imagery processes, with the exception of maintenance, are affected by ageing, suggesting that other deficits, such as working memory deficits, could account for this effect. However, the analysis of the transformation process, investigated in terms of mental rotation and mental folding skills, shows a steeper decline in mental rotation, suggesting that age could affect rigid transformations of objects and spare non-rigid transformations. Our study also adds to previous ones in showing gender differences favoring men across the lifespan in the transformation process, and, interestingly, it shows a steeper decline in men than in women in inspecting mental images, which could partially account for the mixed results about the effect of ageing on this specific process. We also discuss the possibility of introducing the CVMIB into clinical assessment in the context of theoretical models of mental imagery.

  10. The Role of Visual-Spatial Abilities in Dyslexia: Age Differences in Children's Reading?

    Science.gov (United States)

    Giovagnoli, Giulia; Vicari, Stefano; Tomassetti, Serena; Menghini, Deny

    2016-01-01

    Reading is a highly complex process in which integrative neurocognitive functions are required. Visual-spatial abilities play a pivotal role because of the multi-faceted visual sensory processing involved in reading. Several studies show that children with developmental dyslexia (DD) fail to develop effective visual strategies and that some reading difficulties are linked to visual-spatial deficits. However, the relationship between visual-spatial skills and reading abilities is still a controversial issue. Crucially, the role that age plays has not been investigated in depth in this population, and it is still not clear if visual-spatial abilities differ across educational stages in DD. The aim of the present study was to investigate visual-spatial abilities in children with DD and in age-matched normal readers (NR) according to different educational stages: in children attending primary school and in children and adolescents attending secondary school. Moreover, in order to verify whether visual-spatial measures could predict reading performance, a regression analysis has been performed in younger and older children. The results showed that younger children with DD performed significantly worse than NR in a mental rotation task, a more-local visual-spatial task, a more-global visual-perceptual task and a visual-motor integration task. However, older children with DD showed deficits in the more-global visual-perceptual task, in a mental rotation task and in a visual attention task. In younger children, the regression analysis documented that reading abilities are predicted by the visual-motor integration task, while in older children only the more-global visual-perceptual task predicted reading performances. Present findings showed that visual-spatial deficits in children with DD were age-dependent and that visual-spatial abilities engaged in reading varied across different educational stages. In order to better understand their potential role in affecting reading

  11. Accurate expectancies diminish perceptual distraction during visual search

    Science.gov (United States)

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  12. Accurate expectancies diminish perceptual distraction during visual search

    Directory of Open Access Journals (Sweden)

    Jocelyn L Sy

    2014-05-01

    Full Text Available The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively spills-over to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, fMRI, and electrophysiology. Expectations were generated by a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean BOLD responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information.

  13. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4.

    Science.gov (United States)

    Braun, J

    1994-02-01

    In more than one respect, visual search for the most salient or the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.

  14. Performance on Auditory and Visual Tasks of Inhibition in English Monolingual and Spanish-English Bilingual Adults: Do Bilinguals Have a Cognitive Advantage?

    Science.gov (United States)

    Desjardins, Jamie L.; Fernandez, Francisco

    2018-01-01

    Purpose: Bilingual individuals have been shown to be more proficient on visual tasks of inhibition compared with their monolingual counterparts. However, the bilingual advantage has not been evidenced in all studies, and very little is known regarding how bilingualism influences inhibitory control in the perception of auditory information. The…

  15. The Effects of Adding Coordinate Axes To a Mental Rotations Task in Measuring Spatial Visualization Ability in Introductory Undergraduate Technical Graphics Courses.

    Science.gov (United States)

    Branoff, Ted

    1998-01-01

    Reports on a study to determine whether the presence of coordinate axes in a test of spatial-visualization ability affects scores and response times on a mental-rotations task for students enrolled in undergraduate introductory graphic communications classes. Based on Paivio's dual-coding theory. Contains 36 references. (DDR)

  16. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  17. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.

  18. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory

    Directory of Open Access Journals (Sweden)

    Masahiro eKawasaki

    2014-03-01

    Full Text Available In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
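
    One common way to quantify the theta phase synchronization described in these two records is the phase-locking value between band-pass-filtered, Hilbert-transformed signals from two electrodes. The Python sketch below (using numpy and scipy) illustrates that computation on synthetic signals; it is not the authors' analysis code, and the sampling rate, filter order, and stand-in data are assumptions.

```python
# Hedged sketch: fronto-parietal theta phase-locking value (PLV) on synthetic data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
frontal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)    # stand-in signals
parietal = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(t.size)

b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")            # 4-8 Hz theta band
phase_f = np.angle(hilbert(filtfilt(b, a, frontal)))
phase_p = np.angle(hilbert(filtfilt(b, a, parietal)))

plv = np.abs(np.mean(np.exp(1j * (phase_f - phase_p))))                 # phase-locking value
print(f"fronto-parietal theta PLV = {plv:.2f}")                         # 1 = perfect locking
```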

  19. Effects of monitoring for visual events on distinct components of attention

    Directory of Open Access Journals (Sweden)

    Christian H. Poth

    2014-08-01

    Full Text Available Monitoring the environment for visual events while performing a concurrent task requires adjustment of visual processing priorities. Using Bundesen's (1990) Theory of Visual Attention (TVA), we investigated how monitoring for an object-based brief event affected distinct components of visual attention in a concurrent task. The perceptual salience of the event was varied. Monitoring reduced the processing speed in the concurrent task, and the reduction was stronger when the event was less salient. The monitoring task affected neither the temporal threshold of conscious perception, nor the storage capacity of visual short-term memory, nor the efficiency of top-down controlled attentional selection.

  20. Components of working memory and visual selective attention.

    Science.gov (United States)

    Burnham, Bryan R; Sabia, Matthew; Langan, Catherine

    2014-02-01

    Load theory (Lavie, N., Hirst, A., De Fockert, J. W., & Viding, E. [2004]. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133, 339-354.) proposes that control of attention depends on the amount and type of load that is imposed by current processing. Specifically, perceptual load should lead to efficient distractor rejection, whereas working memory load (dual-task coordination) should hinder distractor rejection. Studies support load theory's prediction that working memory load will lead to larger distractor effects; however, these studies used secondary tasks that required only verbal working memory and the central executive. The present study examined which other working memory components (visual, spatial, and phonological) influence visual selective attention. Subjects completed an attentional capture task alone (single-task) or while engaged in a working memory task (dual-task). Results showed that along with the central executive, visual and spatial working memory influenced selective attention, but phonological working memory did not. Specifically, attentional capture was larger when visual or spatial working memory was loaded, but phonological working memory load did not affect attentional capture. The results are consistent with load theory and suggest specific components of working memory influence visual selective attention. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. A comparison of tracking with visual and kinesthetic-tactual displays

    Science.gov (United States)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1981-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under appropriate conditions it may be an effective means of providing visual workload relief. In order to better understand how KT tracking differs from visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. On the critical tracking task, the visual displays were superior; however, the KT quickened display was approximately equal to the visual unquickened display. Mean squared error scores in the stationary tracking tasks for the visual and KT displays were approximately equal in the quickened conditions, and the describing functions were very similar. In the unquickened conditions, the visual display was superior. Subjects using the unquickened KT display exhibited a low frequency lead-lag that may be related to sensory adaptation.

  2. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity.

    Science.gov (United States)

    Simon, Sharon S; Tusch, Erich S; Holcomb, Phillip J; Daffner, Kirk R

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models.

  3. Increasing working memory load reduces processing of cross-modal task-irrelevant stimuli even after controlling for task difficulty and executive capacity

    Directory of Open Access Journals (Sweden)

    Sharon Sanz Simon

    2016-08-01

    Full Text Available The classic account of the Load Theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task-relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models.

  4. Measuring listening effort: driving simulator versus simple dual-task paradigm.

    Science.gov (United States)

    Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth

    2014-01-01

    The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real world dual-task paradigm and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in the driving task or the visual reaction-time task performance across the conditions quantified the change in listening effort. Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant. The finding showing that our older (56 to 85 years old) participants' better speech recognition performance did not result in reduced listening effort was not
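
    The dual-task logic described above infers listening effort from how much secondary-task performance (driving or visual reaction time) degrades when the speech task is added. A minimal Python sketch of that arithmetic follows; the lane-keeping and reaction-time numbers are invented for illustration and are not taken from the study.

```python
# Hedged sketch of dual-task cost computations used to index listening effort.
def proportional_decline(single, dual):
    """Relative drop in a secondary-task score when the speech task is added."""
    return (single - dual) / single

lane_keeping_single = 0.92     # proportion of time in lane, driving only (hypothetical)
lane_keeping_dual = 0.85       # same measure while also recognizing speech (hypothetical)
print(f"driving decrement: {proportional_decline(lane_keeping_single, lane_keeping_dual):.1%}")

rt_single, rt_dual = 420.0, 505.0    # visual reaction times in ms (hypothetical)
print(f"RT slowdown: {(rt_dual - rt_single) / rt_single:.1%}")   # larger cost = more effort
```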

  5. Visual cue-specific craving is diminished in stressed smokers.

    Science.gov (United States)

    Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R

    2017-09-01

    Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who forwent smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and after the stress task, and after a recovery period following each task. As expected, the stress task and smoking images generated greater craving than neutral or motivational control images. The results further suggest that once smokers are stressed, visual cues have little additive effect on craving, and that different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably regardless of whether they are exposed to smoking image cues.

  6. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories.

    Science.gov (United States)

    Seemüller, Anna; Fiehler, Katja; Rösler, Frank

    2011-01-01

    The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. The effect of four user interface concepts on visual scan pattern similarity and information foraging in a complex decision making task.

    Science.gov (United States)

    Starke, Sandra D; Baber, Chris

    2018-07-01

    User interface (UI) design can affect the quality of decision making, where decisions based on digitally presented content are commonly informed by visually sampling information through eye movements. Analysis of the resulting scan patterns - the order in which people visually attend to different regions of interest (ROIs) - gives an insight into information foraging strategies. In this study, we quantified scan pattern characteristics for participants engaging with conceptually different user interface designs. Four interfaces were modified along two dimensions relating to effort in accessing information: data presentation (either alpha-numerical data or colour blocks), and information access time (all information sources readily available or sequential revealing of information required). The aim of the study was to investigate whether a) people develop repeatable scan patterns and b) different UI concepts affect information foraging and task performance. Thirty-two participants (eight for each UI concept) were given the task to correctly classify 100 credit card transactions as normal or fraudulent based on nine transaction attributes. Attributes varied in their usefulness of predicting the correct outcome. Conventional and more recent (network analysis- and bioinformatics-based) eye tracking metrics were used to quantify visual search. Empirical findings were evaluated in context of random data and possible accuracy for theoretical decision making strategies. Results showed short repeating sequence fragments within longer scan patterns across participants and conditions, comprising a systematic and a random search component. The UI design concept showing alpha-numerical data in full view resulted in most complete data foraging, while the design concept showing colour blocks in full view resulted in the fastest task completion time. Decision accuracy was not significantly affected by UI design. Theoretical calculations showed that the difference in achievable

  8. The Attentional Boost Effect: Transient increases in attention to one task enhance performance in a second task.

    Science.gov (United States)

    Swallow, Khena M; Jiang, Yuhong V

    2010-04-01

    Recent work on event perception suggests that perceptual processing increases when events change. An important question is how such changes influence the way other information is processed, particularly during dual-task performance. In this study, participants monitored a long series of distractor items for an occasional target as they simultaneously encoded unrelated background scenes. The appearance of an occasional target could have two opposite effects on the secondary task: It could draw attention away from the second task, or, as a change in the ongoing event, it could improve secondary task performance. Results were consistent with the second possibility. Memory for scenes presented simultaneously with the targets was better than memory for scenes that preceded or followed the targets. This effect was observed when the primary detection task involved visual feature oddball detection, auditory oddball detection, and visual color-shape conjunction detection. It was eliminated when the detection task was omitted, and when it required an arbitrary response mapping. The appearance of occasional, task-relevant events appears to trigger a temporal orienting response that facilitates processing of concurrently attended information (Attentional Boost Effect). Copyright 2009 Elsevier B.V. All rights reserved.

  9. Combining program visualization with programming workspace to assist students for completing programming laboratory task

    Directory of Open Access Journals (Sweden)

    Elvina Elvina

    2018-06-01

    Full Text Available Numerous Program Visualization tools (PVs) have been developed to help novice students better understand their source code. However, none of them is practical for completing programming laboratory tasks: students must keep switching between the PV and the programming workspace whenever they need to know how their code works. This paper combines a PV with a programming workspace to address this issue. The resulting tool (named PITON) has 13 features extracted from PythonTutor, PyCharm, and students' feedback about PythonTutor. According to a think-aloud and a user study, PITON is more practical to use than a combination of PythonTutor and PyCharm. Furthermore, its features are considerably helpful; students rated them as useful and frequently used.

  10. Cognitive load effects on early visual perceptual processing.

    Science.gov (United States)

    Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia

    2018-05-01

    Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.

  11. The integration of temporally shifted visual feedback in a synchronization task: The role of perceptual stability in a visuo-proprioceptive conflict situation.

    Science.gov (United States)

    Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J

    2010-12-01

    The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode in contrast to the less stable anti-phase and intermediate mode. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Performance improvements from imagery:evidence that internal visual imagery is superior to external visual imagery for slalom performance

    Directory of Open Access Journals (Sweden)

    Nichola eCallow

    2013-10-01

    Full Text Available We report three experiments investigating the hypothesis that use of internal visual imagery (IVI) would be superior to external visual imagery (EVI) for the performance of different slalom-based motor tasks. In Experiment 1, three groups of participants (IVI, EVI, and a control group) performed a driving-simulation slalom task. The IVI group achieved significantly quicker lap times than the EVI and control groups. In Experiment 2, participants performed a downhill running slalom task under both IVI and EVI conditions. Performance was again quickest in the IVI condition compared to the EVI condition, with no differences in accuracy. Experiment 3 used the same group design as Experiment 1, but with participants performing a downhill ski-slalom task. Results revealed the IVI group to be significantly more accurate than the control group, with no significant differences in time taken to complete the task. These results support the beneficial effects of IVI for slalom-based tasks, and significantly advance our knowledge of the differential effects of visual imagery perspectives on motor performance.

  13. Robust visual tracking via structured multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra

    2012-01-01

    In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary

  14. Perceptual learning in children with visual impairment improves near visual acuity.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes per session (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart). Children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.).
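
    For readers unfamiliar with the logMAR notation used above: one chart line corresponds to 0.1 log units of minimum angle of resolution (MAR), so a gain expressed in lines can be converted into a multiplicative change in MAR. A tiny worked example follows; the 1.7-line figure is taken from the abstract, and the conversion itself is just the standard definition of the scale.

```python
# One logMAR chart line = 0.1 log10 units of minimum angle of resolution (MAR).
lines_gained = 1.7
mar_ratio = 10 ** (0.1 * lines_gained)
print(round(mar_ratio, 2))  # ~1.48: resolvable detail is roughly 1.5x finer after training
```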

  15. Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.

    Science.gov (United States)

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2011-10-01

    The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. The Commingled Division of Visual Attention.

    Directory of Open Access Journals (Sweden)

    Yuechuan Sun

    Full Text Available Many critical activities require visual attention to be distributed simultaneously among distinct tasks where the attended foci are not spatially separated. In our two experiments, participants performed a large number of trials where both a primary task (enumeration of spots) and a secondary task (reporting the presence/absence or identity of a distinctive shape) required the division of visual attention. The spots and the shape were commingled spatially and the shape appeared unpredictably on a relatively small fraction of the trials. The secondary task stimulus (the shape) was reported in inverse proportion to the attentional load imposed by the primary task (enumeration of spots). When the shape did appear, performance on the primary task (enumeration) suffered relative to when the shape was absent; both speed and accuracy were compromised. When the secondary task required identification in addition to detection, reaction times increased by about 200 percent. These results are broadly compatible with biased competition models of perceptual processing. An important area of application, where the commingled division of visual attention is required, is the augmented reality head-up display (AR-HUD). This innovation has the potential to make operating vehicles safer but our data suggest that there are significant concerns regarding driver distraction.

  17. Improving Design Communication: Advanced Visualization

    National Research Council Canada - National Science Library

    Adeoye, Blessing

    2001-01-01

    .... While design professionals may use similar visual modes (lines, text, graphic symbols, etc.) to represent and communicate concepts in complex drawing tasks, similar visual modes may be used ambiguously across disciplines...

  18. Effects of Distracting Task with Different Mental Workload on Steady-State Visual Evoked Potential Based Brain Computer Interfaces—an Offline Study

    Directory of Open Access Journals (Sweden)

    Yawei Zhao

    2018-02-01

    Full Text Available Brain-computer interfaces (BCIs), independent of the brain's normal output pathways, are attracting an increasing amount of attention as devices that extract neural information. As a typical type of BCI system, steady-state visual evoked potential (SSVEP)-based BCIs possess a high signal-to-noise ratio and information transfer rate. However, current high-speed SSVEP-BCIs have been implemented with subjects concentrating on the stimuli, intentionally avoiding additional tasks that could act as distractors. This paper aimed to investigate how a distracting simultaneous task, a verbal n-back task with different levels of mental workload, would affect the performance of an SSVEP-BCI. The results from fifteen subjects revealed that the recognition accuracy of the SSVEP-BCI was significantly impaired by the distracting task, especially under a high mental workload. The average classification accuracy across all subjects dropped by 8.67% at most from 1-back to 4-back, and there was a significant negative correlation (maximum r = −0.48, p < 0.001) between accuracy and the subjective mental workload rating of the distracting task. This study suggests a potential hindrance to the daily use of SSVEP-BCIs, and improvements should be investigated in future studies.
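
    The abstract reports SSVEP-BCI recognition accuracy but does not spell out the recognition algorithm. Canonical correlation analysis (CCA) against sine/cosine reference signals is one widely used frequency-recognition method for SSVEP-BCIs, so a minimal sketch of that approach is given below purely for orientation; the channel count, candidate frequencies, number of harmonics, and toy data are assumptions, and the study's actual pipeline may well differ.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_classify(eeg, srate, candidate_freqs, n_harmonics=2):
    """Return the candidate stimulation frequency whose sine/cosine reference
    set is maximally correlated (via CCA) with the multichannel EEG segment.

    eeg: array of shape (n_samples, n_channels); srate in Hz.
    """
    t = np.arange(eeg.shape[0]) / srate
    scores = []
    for f in candidate_freqs:
        # Sine/cosine references at the frequency and its harmonics.
        ref = np.column_stack([fn(2 * np.pi * h * f * t)
                               for h in range(1, n_harmonics + 1)
                               for fn in (np.sin, np.cos)])
        x_c, y_c = CCA(n_components=1).fit_transform(eeg, ref)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return candidate_freqs[int(np.argmax(scores))], scores

# Toy usage: a 2-channel signal containing a 12 Hz component plus noise.
srate = 250
t = np.arange(2 * srate) / srate
rng = np.random.default_rng(1)
eeg = np.column_stack([np.sin(2 * np.pi * 12 * t), np.cos(2 * np.pi * 12 * t)])
eeg = eeg + rng.normal(0.0, 0.5, eeg.shape)
print(ssvep_classify(eeg, srate, [10.0, 12.0, 15.0]))
```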

  19. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  20. Visual search deficits in amblyopia.

    Science.gov (United States)

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.
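
    The reaction-time slopes mentioned above index search efficiency: median RT is regressed on set size, and the slope (ms per additional display item) is compared across groups or conditions. A minimal sketch with made-up medians, not the study's data:

```python
import numpy as np

def search_slope(set_sizes, median_rts_ms):
    """Least-squares slope (ms/item) and intercept (ms) of the RT-by-set-size
    function, the conventional index of visual search efficiency."""
    slope, intercept = np.polyfit(set_sizes, median_rts_ms, deg=1)
    return slope, intercept

set_sizes = [4, 8, 16, 32]
print(search_slope(set_sizes, [520, 525, 530, 540]))   # shallow slope: efficient search
print(search_slope(set_sizes, [600, 700, 900, 1300]))  # steep slope: inefficient search
```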

  1. Semantic elaboration in auditory and visual spatial memory.

    Science.gov (United States)

    Taevs, Meghan; Dahmani, Louisa; Zatorre, Robert J; Bohbot, Véronique D

    2010-01-01

    The aim of this study was to investigate the hypothesis that semantic information facilitates auditory and visual spatial learning and memory. An auditory spatial task was administered, whereby healthy participants were placed in the center of a semi-circle that contained an array of speakers where the locations of nameable and non-nameable sounds were learned. In the visual spatial task, locations of pictures of abstract art intermixed with nameable objects were learned by presenting these items in specific locations on a computer screen. Participants took part in both the auditory and visual spatial tasks, which were counterbalanced for order and were learned at the same rate. Results showed that learning and memory for the spatial locations of nameable sounds and pictures was significantly better than for non-nameable stimuli. Interestingly, there was a cross-modal learning effect such that the auditory task facilitated learning of the visual task and vice versa. In conclusion, our results support the hypotheses that the semantic representation of items, as well as the presentation of items in different modalities, facilitate spatial learning and memory.

  2. Correlated individual differences suggest a common mechanism underlying metacognition in visual perception and visual short-term memory.

    Science.gov (United States)

    Samaha, Jason; Postle, Bradley R

    2017-11-29

    Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).

  3. Exploring Visual Bookmarks and Layered Visualizations

    NARCIS (Netherlands)

    J.T. Teuben (Jan)

    2010-01-01

    Cultural heritage experts are confronted with a difficult information gathering task while conducting comparison searches. Saving searches and re-examining previous work could help them to do their work. In this paper we propose a solution in which we combine visual bookmarks for saving

  4. Effects of visual attention on chromatic and achromatic detection sensitivities.

    Science.gov (United States)

    Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko

    2014-05-01

    Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. In Experiment 1, we confirmed that, with the central attention task, peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. In Experiment 2, detection thresholds in the dual-task condition increased more in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction. In Experiment 3, we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.

  5. The effects of memory load and stimulus relevance on the EEG during a visual selective memory search task : An ERP and ERD/ERS study

    NARCIS (Netherlands)

    Gomarus, HK; Althaus, M; Wijers, AA; Minderaa, RB

    Objective: Psychophysiological correlates of selective attention and working memory were investigated in a group of 18 healthy children using a visually presented selective memory search task. Methods: Subjects had to memorize one (load 1) or three (load 3) letters (memory set) and search for these

  6. Method matters: systematic effects of testing procedure on visual working memory sensitivity.

    Science.gov (United States)

    Makovski, Tal; Watson, Leah M; Koutstaal, Wilma; Jiang, Yuhong V

    2010-11-01

    Visual working memory (WM) is traditionally considered a robust form of visual representation that survives changes in object motion, observer's position, and other visual transients. This article presents data that are inconsistent with the traditional view. We show that memory sensitivity is dramatically influenced by small variations in the testing procedure, supporting the idea that representations in visual WM are susceptible to interference from testing. In the study, participants were shown an array of colors to remember. After a short retention interval, memory for one of the items was tested with either a same-different task or a 2-alternative-forced-choice (2AFC) task. Memory sensitivity was much lower in the 2AFC task than in the same-different task. This difference was found regardless of encoding similarity or of whether visual WM required a fine or coarse memory resolution. The 2AFC disadvantage was reduced when participants were informed shortly before testing which item would be probed. The 2AFC disadvantage diminished in perceptual tasks and was not found in tasks probing visual long-term memory. These results support memory models that acknowledge the labile nature of visual WM and have implications for the format of visual WM and its assessment. (c) 2010 APA, all rights reserved
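
    The abstract contrasts memory sensitivity measured with a same-different test and with a 2AFC test. The paper's exact metrics are not reproduced here, but textbook signal-detection estimates for the two procedures look like the sketch below; the same-different estimate treats "different" responses as hits/false alarms and ignores the differencing-model correction, and all the rates are made up.

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime_same_different(hit_rate, false_alarm_rate):
    """Yes/no-style sensitivity for a same-different task: 'different'
    responses on change trials are hits, on no-change trials false alarms."""
    return z(hit_rate) - z(false_alarm_rate)

def dprime_2afc(proportion_correct):
    """Sensitivity estimate for a two-alternative forced-choice task."""
    return sqrt(2) * z(proportion_correct)

print(dprime_same_different(0.80, 0.25))  # ~1.52 with these made-up rates
print(dprime_2afc(0.75))                  # ~0.95
```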

  7. Improved discrimination of visual stimuli following repetitive transcranial magnetic stimulation.

    Directory of Open Access Journals (Sweden)

    Michael L Waterston

    Full Text Available BACKGROUND: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a "virtual lesion" in stimulated brain regions, with correspondingly diminished behavioral performance. METHODOLOGY/PRINCIPAL FINDINGS: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. CONCLUSIONS/SIGNIFICANCE: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception.

  8. Kinesthetic Imagery Provides Additive Benefits to Internal Visual Imagery on Slalom Task Performance.

    Science.gov (United States)

    Callow, Nichola; Jiang, Dan; Roberts, Ross; Edwards, Martin G

    2017-02-01

    Recent brain imaging research demonstrates that the use of internal visual imagery (IVI) or kinesthetic imagery (KIN) activates common and distinct brain areas. In this paper, we argue that combining the imagery modalities (IVI and KIN) will lead to a greater cognitive representation (with more brain areas activated), and this will cause a greater slalom-based motor performance compared with using IVI alone. To examine this assertion, we randomly allocated 56 participants to one of the three groups: IVI, IVI and KIN, or a math control group. Participants performed a slalom-based driving task in a driving simulator, with average lap time used as a measure of performance. Results revealed that the IVI and KIN group achieved significantly quicker lap times than the IVI and the control groups. The discussion includes a theoretical advancement on why the combination of imagery modalities might facilitate performance, with links made to the cognitive neuroscience literature and applied practice.

  9. The Effect of Prior Task Success on Older Adults' Memory Performance: Examining the Influence of Different Types of Task Success.

    Science.gov (United States)

    Geraci, Lisa; Hughes, Matthew L; Miller, Tyler M; De Forrest, Ross L

    2016-01-01

    Negative aging stereotypes can lead older adults to perform poorly on memory tests. Yet, memory performance can be improved if older adults have a single successful experience on a cognitive test prior to participating in a memory experiment (Geraci & Miller, 2013, Psychology and Aging, 28, 340-345). The current study examined the effects of different types of prior task experience on subsequent memory performance. Before participating in a verbal free recall experiment, older adults in Experiment 1 successfully completed either a verbal or a visual cognitive task or no task. In Experiment 2, they successfully completed either a motor task or no task before participating in the free recall experiment. Results from Experiment 1 showed that relative to control (no prior task), participants who had prior success, either on a verbal or a visual task, had better subsequent recall performance. Experiment 2 showed that prior success on a motor task, however, did not lead to a later memory advantage relative to control. These findings demonstrate that older adults' memory can be improved by a successful prior task experience so long as that experience is in a cognitive domain.

  10. Concurrent performance of two memory tasks: evidence for domain-specific working memory systems.

    Science.gov (United States)

    Cocchini, Gianna; Logie, Robert H; Della Sala, Sergio; MacPherson, Sarah E; Baddeley, Alan D

    2002-10-01

    Previous studies of dual-task coordination in working memory have shown a lack of dual-task interference when a verbal memory task is combined with concurrent perceptuomotor tracking. Two experiments are reported in which participants were required to perform pairwise combinations of (1) a verbal memory task, a visual memory task, and perceptuomotor tracking (Experiment 1), and (2) pairwise combinations of the two memory tasks and articulatory suppression (Experiment 2). Tracking resulted in no disruption of the verbal memory preload over and above the impact of a delay in recall and showed only minimal disruption of the retention of the visual memory load. Performing an ongoing verbal memory task had virtually no impact on retention of a visual memory preload or vice versa, indicating that performing two demanding memory tasks results in little mutual interference. Experiment 2 also showed minimal disruption when the two memory tasks were combined, although verbal memory (but not visual memory) was clearly disrupted by articulatory suppression interpolated between presentation and recall. These data suggest that a multiple-component working memory model provides a better account for performance in concurrent immediate memory tasks than do theories that assume a single processing and storage system or a limited-capacity attentional system coupled with activated memory traces.

  11. An fMRI investigation into the effect of preceding stimuli during visual oddball tasks.

    Science.gov (United States)

    Fajkus, Jiří; Mikl, Michal; Shaw, Daniel Joel; Brázdil, Milan

    2015-08-15

    This study investigates the modulatory effect of stimulus sequence on neural responses to novel stimuli. A group of 34 healthy volunteers underwent event-related functional magnetic resonance imaging while performing a three-stimulus visual oddball task, involving randomly presented frequent stimuli and two types of infrequent stimuli - targets and distractors. We developed a modified categorization of rare stimuli that incorporated the type of preceding rare stimulus, and analyzed the event-related functional data according to this sequence categorization; specifically, we explored hemodynamic response modulation associated with increasing rare-to-rare stimulus interval. For two consecutive targets, a modulation of brain function was evident throughout posterior midline and lateral temporal cortex, while responses to targets preceded by distractors were modulated in a widely distributed fronto-parietal system. As for distractors that follow targets, brain function was modulated throughout a set of posterior brain structures. For two successive distractors, however, no significant modulation was observed, which is consistent with previous studies and our primary hypothesis. The addition of the aforementioned technique extends the possibilities of conventional oddball task analysis, enabling researchers to explore the effects of the whole range of rare stimuli intervals. This methodology can be applied to study a wide range of associated cognitive mechanisms, such as decision making, expectancy and attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Robust visual tracking via structured multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-11-09

    In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing ℓp,q mixed norms (specifically p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intel 33(11):2259-2272, 2011) is a special case of our MTT formulation (denoted as the L11 tracker) when p=q=1. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers. © 2012 Springer Science+Business Media New York.
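
    As a concrete illustration of the jointly regularized representation described above, the sketch below solves an ℓ2,1-regularized multi-task sparse coding problem with an accelerated proximal gradient (FISTA-style) loop, so that all particle representations are learned together with atom-wise joint sparsity. This is a generic sketch rather than the authors' S-MTT code: the pairwise structural correlation terms of S-MTT are omitted, and the dictionary size, regularization weight, and toy data are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(W, tau):
    """Row-wise group soft-thresholding: proximal operator of tau * ||W||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12)) * W

def mtt_l21(D, X, lam=0.1, n_iter=200):
    """Accelerated proximal gradient solver for
        min_W 0.5 * ||X - D W||_F^2 + lam * ||W||_{2,1},
    where the columns of W are particle representations learned jointly.

    D: dictionary (n_features, n_atoms); X: particles (n_features, n_particles).
    """
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    W = Z = np.zeros((D.shape[1], X.shape[1]))
    t = 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)                  # gradient of the smooth term at Z
        W_next = group_soft_threshold(Z - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_next + ((t - 1.0) / t_next) * (W_next - W)   # momentum step
        W, t = W_next, t_next
    return W

# Toy usage: 20-D features, 15 dictionary templates, 10 particles generated
# from only 3 atoms, so most rows of W should shrink toward zero.
rng = np.random.default_rng(2)
D = rng.normal(size=(20, 15))
X = D[:, :3] @ rng.normal(size=(3, 10))
W = mtt_l21(D, X, lam=0.5)
print(np.linalg.norm(W, axis=1).round(2))
```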

  13. Interactive data visualization foundations, techniques, and applications

    CERN Document Server

    Ward, Matthew; Keim, Daniel

    2010-01-01

    Visualization is the process of representing data, information, and knowledge in a visual form to support the tasks of exploration, confirmation, presentation, and understanding. This book is designed as a textbook for students, researchers, analysts, professionals, and designers of visualization techniques, tools, and systems. It covers the full spectrum of the field, including mathematical and analytical aspects, ranging from its foundations to human visual perception; from coded algorithms for different types of data, information and tasks to the design and evaluation of new visualization techniques. Sample programs are provided as starting points for building one's own visualization tools. Numerous data sets have been made available that highlight different application areas and allow readers to evaluate the strengths and weaknesses of different visualization methods. Exercises, programming projects, and related readings are given for each chapter. The book concludes with an examination of several existin...

  14. Investigating the Impact of Dual Task Condition and Visual Manipulation on Healthy Young Old During Non-Dominant Leg Stance

    Directory of Open Access Journals (Sweden)

    Bahareh Zeynalzadeh Ghoochani

    2017-06-01

    Discussion: Standing on the non-dominant leg is a challenging task that requires a well-functioning balance system to compensate for the reduced somatosensory input. The examinee therefore had to have the capabilities required to cope with the changes introduced when additional manipulations were added. The most challenging situation in the study occurred when subjects stood on their non-dominant leg with eyes closed, a condition that should be examined carefully because it represents a potential weak point of the balance system. The non-dominant leg was more susceptible to disturbance when an aging adult had no access to visual input or performed dual tasks with eyes closed. It is therefore recommended that such conditions be included in balance assessment tests and interventions.

  15. Evidence for unlimited capacity processing of simple features in visual cortex.

    Science.gov (United States)

    White, Alex L; Runeson, Erik; Palmer, John; Ernst, Zachary R; Boynton, Geoffrey M

    2017-06-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level-dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity.

  16. Global-local visual biases correspond with visual-spatial orientation.

    Science.gov (United States)

    Basso, Michael R; Lowery, Natasha

    2004-02-01

    Within the past decade, numerous investigations have demonstrated reliable associations of global-local visual processing biases with right and left hemisphere function, respectively (cf. Van Kleeck, 1989). Yet the relevance of these biases to other cognitive functions is not well understood. Towards this end, the present research examined the relationship between global-local visual biases and perception of visual-spatial orientation. Twenty-six women and 23 men completed a global-local judgment task (Kimchi and Palmer, 1982) and the Judgment of Line Orientation Test (JLO; Benton, Sivan, Hamsher, Varney, and Spreen, 1994), a measure of visual-spatial orientation. As expected, men had better performance on JLO. Extending previous findings, global biases were related to better visual-spatial acuity on JLO. The findings suggest that global-local biases and visual-spatial orientation may share underlying cerebral mechanisms. Implications of these findings for other visually mediated cognitive outcomes are discussed.

  17. The case against specialized visual-spatial short-term memory.

    Science.gov (United States)

    Morey, Candice C

    2018-05-24

    The dominant paradigm for understanding working memory, or the combination of the perceptual, attentional, and mnemonic processes needed for thinking, subdivides short-term memory (STM) according to whether memoranda are encoded in aural-verbal or visual formats. This traditional dissociation has been supported by examples of neuropsychological patients who seem to selectively lack STM for either aural-verbal, visual, or spatial memoranda, and by experimental research using dual-task methods. Though this evidence is the foundation of assumptions of modular STM systems, the case it makes for a specialized visual STM system is surprisingly weak. I identify the key evidence supporting a distinct verbal STM system-patients with apparent selective damage to verbal STM and the resilience of verbal short-term memories to general dual-task interference-and apply these benchmarks to neuropsychological and experimental investigations of visual-spatial STM. Contrary to the evidence on verbal STM, patients with apparent visual or spatial STM deficits tend to experience a wide range of additional deficits, making it difficult to conclude that a distinct short-term store was damaged. Consistently with this, a meta-analysis of dual-task visual-spatial STM research shows that robust dual-task costs are consistently observed regardless of the domain or sensory code of the secondary task. Together, this evidence suggests that positing a specialized visual STM system is not necessary. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Using frequency tagging to quantify attentional deployment in a visual divided attention task.

    Science.gov (United States)

    Toffanin, Paolo; de Jong, Ritske; Johnson, Addie; Martens, Sander

    2009-06-01

    Frequency tagging is an EEG method based on the quantification of the steady state visual evoked potential (SSVEP) elicited from stimuli which flicker with a distinctive frequency. Because the amplitude of the SSVEP is modulated by attention such that attended stimuli elicit higher SSVEP amplitudes than do ignored stimuli, the method has been used to investigate the neural mechanisms of spatial attention. However, up to now it has not been shown whether the amplitude of the SSVEP is sensitive to gradations of attention and there has been debate about whether attention effects on the SSVEP are dependent on the tagging frequency used. We thus compared attention effects on SSVEP across three attention conditions (focused, divided, and ignored) with six different tagging frequencies. Participants performed a visual detection task (respond to the digit 5 embedded in a stream of characters). Two stimulus streams, one to the left and one to the right of fixation, were displayed simultaneously, each with a background grey square whose hue was sine-modulated with one of the six tagging frequencies. At the beginning of each trial a cue indicated whether targets on the left, right, or both sides should be responded to. Accuracy was higher in the focused- than in the divided-attention condition. SSVEP amplitudes were greatest in the focused-attention condition, intermediate in the divided-attention condition, and smallest in the ignored-attention condition. The effect of attention on SSVEP amplitude did not depend on the tagging frequency used. Frequency tagging appears to be a flexible technique for studying attention.
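
    The SSVEP amplitudes compared across attention conditions above are typically read off the amplitude spectrum at each tagging frequency. A minimal sketch of that step follows; the sampling rate, segment length, and toy signals are illustrative assumptions rather than the study's recording parameters.

```python
import numpy as np

def ssvep_amplitude(segment, srate, tag_freq):
    """Single-sided spectral amplitude at a tagging frequency for one segment."""
    x = np.asarray(segment, dtype=float)
    x = x - x.mean()
    amp = np.abs(np.fft.rfft(x)) * 2.0 / len(x)        # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / srate)
    return amp[np.argmin(np.abs(freqs - tag_freq))]    # bin nearest the tag frequency

# Toy usage: a 7.5 Hz "attended" component is stronger than a 12 Hz "ignored" one.
srate = 250
t = np.arange(4 * srate) / srate
rng = np.random.default_rng(3)
seg = 2.0 * np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.sin(2 * np.pi * 12.0 * t)
seg = seg + rng.normal(0.0, 1.0, t.size)
print(ssvep_amplitude(seg, srate, 7.5), ssvep_amplitude(seg, srate, 12.0))
```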

  19. Task-relevant information is prioritized in spatiotemporal contextual cueing.

    Science.gov (United States)

    Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun

    2016-11-01

    Implicit learning of visual contexts facilitates search performance-a phenomenon known as contextual cueing; however, little is known about contextual cueing under situations in which multidimensional regularities exist simultaneously. In everyday vision, different information, such as object identity and location, appears simultaneously and interacts with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals would be prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests our visual behavior is focused on regularities that are relevant to our current goal.

  20. Mental Imagery and Visual Working Memory

    Science.gov (United States)

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024

  3. Local and global limits on visual processing in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Marc S Tibber

    Full Text Available Schizophrenia has been linked to impaired performance on a range of visual processing tasks (e.g. detection of coherent motion and contour detection). It has been proposed that this is due to a general inability to integrate visual information at a global level. To test this theory, we assessed the performance of people with schizophrenia on a battery of tasks designed to probe voluntary averaging in different visual domains. Twenty-three outpatients with schizophrenia (mean age: 40±8 years; 3 female) and 20 age-matched control participants (mean age 39±9 years; 3 female) performed a motion coherence task and three equivalent noise (averaging) tasks, the latter allowing independent quantification of local and global limits on visual processing of motion, orientation and size. All performance measures were indistinguishable between the two groups (ps>0.05, one-way ANCOVAs), with one exception: participants with schizophrenia pooled fewer estimates of local orientation than controls when estimating average orientation (p = 0.01, one-way ANCOVA). These data do not support the notion of a generalised visual integration deficit in schizophrenia. Instead, they suggest that distinct visual dimensions are differentially affected in schizophrenia, with a specific impairment in the integration of visual orientation information.
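
    The equivalent noise (averaging) tasks mentioned above separate a local limit (internal noise) from a global limit (the effective number of samples pooled) by measuring discrimination thresholds at several external noise levels and fitting the standard equivalent-noise equation. A minimal fitting sketch follows, with made-up thresholds and the usual two-parameter model; the study's exact parameterization may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def equivalent_noise(sigma_ext, sigma_int, n_samp):
    """Standard equivalent-noise (averaging) model: threshold as a function of
    external noise, limited by internal noise and the number of pooled samples."""
    return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samp)

# Made-up orientation thresholds (deg) at increasing external noise levels.
sigma_ext = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
thresholds = np.array([1.1, 1.2, 1.5, 2.2, 3.9, 7.8])
(sigma_int, n_samp), _ = curve_fit(equivalent_noise, sigma_ext, thresholds, p0=[1.0, 4.0])
print(round(float(sigma_int), 2), round(float(n_samp), 1))  # local and global estimates
```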

  4. Functional roles of 10 Hz alpha-band power modulating engagement and disengagement of cortical networks in a complex visual motion task.

    Directory of Open Access Journals (Sweden)

    Kunjan D Rana

    Full Text Available Alpha band power, particularly at the 10 Hz frequency, is significantly involved in sensory inhibition, attention modulation, and working memory. However, the interactions between cortical areas and their relationship to the different functional roles of the alpha band oscillations are still poorly understood. Here we examined alpha band power and the cortico-cortical interregional phase synchrony in a psychophysical task involving the detection of an object moving in depth by an observer in forward self-motion. Wavelet filtering at the 10 Hz frequency revealed differences in the profile of cortical activation in the visual processing regions (occipital and parietal lobes) and in the frontoparietal regions. The alpha rhythm driving the visual processing areas was found to be asynchronous with the frontoparietal regions. These findings suggest a decoupling of the 10 Hz frequency into separate functional roles: sensory inhibition in the visual processing regions and spatial attention in the frontoparietal regions.
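
    Two quantities drive the findings above: narrow-band 10 Hz power obtained by wavelet filtering, and interregional phase synchrony. A minimal sketch of both computations is given below using a complex Morlet wavelet and the phase-locking value (PLV). Note that PLV is normally computed across trials; here it is computed over time within a single toy segment purely to keep the example short, and every parameter is an illustrative assumption.

```python
import numpy as np

def morlet(freq, srate, n_cycles=7):
    """Complex Morlet wavelet centred on `freq` (Hz)."""
    sd = n_cycles / (2.0 * np.pi * freq)               # Gaussian envelope SD in seconds
    t = np.arange(-4 * sd, 4 * sd, 1.0 / srate)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2.0 * sd ** 2))

def alpha_power_and_plv(sig_a, sig_b, srate, freq=10.0):
    """Mean 10 Hz power of each signal and their phase-locking value (PLV)."""
    w = morlet(freq, srate)
    a = np.convolve(sig_a, w, mode="same")              # complex band-limited signal
    b = np.convolve(sig_b, w, mode="same")
    plv = np.abs(np.mean(np.exp(1j * (np.angle(a) - np.angle(b)))))
    return (np.abs(a) ** 2).mean(), (np.abs(b) ** 2).mean(), plv

# Toy usage: two signals sharing a 10 Hz component with a fixed phase lag.
srate = 250
t = np.arange(4 * srate) / srate
rng = np.random.default_rng(4)
sig_a = np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 0.5, t.size)
sig_b = np.sin(2 * np.pi * 10 * t + 0.8) + rng.normal(0.0, 0.5, t.size)
print(alpha_power_and_plv(sig_a, sig_b, srate))
```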

  5. The case of the missing visual details: Occlusion and long-term visual memory.

    Science.gov (United States)

    Williams, Carrick C; Burkle, Kyle A

    2017-10-01

    To investigate the critical information in long-term visual memory representations of objects, we used occlusion to emphasize 1 type of information or another. By occluding 1 solid side of the object (e.g., top 50%) or by occluding 50% of the object with stripes (like a picket fence), we emphasized visible information about the object, processing the visible details in the former and the object's overall form in the latter. On a token discrimination test, surprisingly, memory for solid or stripe occluded objects at either encoding (Experiment 1) or test (Experiment 2) was the same. In contrast, when occluded objects matched at encoding and test (Experiment 3) or when the occlusion shifted, revealing the entire object piecemeal (Experiment 4), memory was better for solid compared with stripe occluded objects, indicating that objects are represented differently in long-term visual memory. Critically, we also found that when the task emphasized remembering exactly what was shown, memory performance in the more detailed solid occlusion condition exceeded that in the stripe condition (Experiment 5). However, when the task emphasized the whole object form, memory was better in the stripe condition (Experiment 6) than in the solid condition. We argue that long-term visual memory can represent objects flexibly, and task demands can interact with visual information, allowing the viewer to cope with changing real-world visual environments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Functional magnetic resonance imaging by visual stimulation

    International Nuclear Information System (INIS)

    Nishimura, Yukiko; Negoro, Kiyoshi; Morimatsu, Mitsunori; Hashida, Masahiro

    1996-01-01

    We evaluated functional magnetic resonance images obtained in 8 healthy subjects in response to visual stimulation using a conventional clinical magnetic resonance imaging system with multi-slice spin-echo echo planar imaging. Activation in the visual cortex was clearly demonstrated by the multi-slice experiment with a task-related change in signal intensity. In addition to the primary visual cortex, other areas were also activated by a complicated visual task. Multi-slice spin-echo echo planar imaging offers high temporal resolution and allows the three-dimensional analysis of brain function. Functional magnetic resonance imaging provides a useful noninvasive method of mapping brain function. (author)

  7. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-06-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.
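
    The joint-sparsity regularizer at the heart of MTT can be illustrated outside of the tracking loop. Below is a minimal NumPy sketch, not the authors' implementation: it uses plain proximal gradient rather than APG, the dictionary, data, and λ are illustrative, and it solves only the ℓ2,1 instance of the mixed-norm problem, min_W 0.5·||X − DW||²_F + λ·||W||_{2,1}, where the rows of W correspond to templates shared across particles.

```python
import numpy as np

def l21_prox(W, t):
    """Row-wise soft thresholding: the proximal operator of t * ||W||_{2,1}."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    shrink = np.maximum(1.0 - t / np.maximum(row_norms, 1e-12), 0.0)
    return W * shrink

def multi_task_sparse_code(D, X, lam=0.5, n_iter=300):
    """Minimize 0.5*||X - D @ W||_F^2 + lam*||W||_{2,1} by proximal gradient descent.

    D: (d, k) template dictionary; X: (d, n) particle observations;
    W: (k, n) coefficients -- one column per particle ("task"); the l2,1 penalty
    switches whole rows (templates) on or off jointly across all particles.
    """
    W = np.zeros((D.shape[1], X.shape[1]))
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ W - X)                    # gradient of the smooth data term
        W = l21_prox(W - step * grad, step * lam)   # shrinkage (proximal) step
    return W

# toy usage: 20-dimensional features, 10 templates, 50 particles generated from 3 templates
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
X = D[:, :3] @ rng.standard_normal((3, 50)) + 0.1 * rng.standard_normal((20, 50))
W = multi_task_sparse_code(D, X)
print("templates used by any particle:", int((np.linalg.norm(W, axis=1) > 1e-6).sum()))
```

    The row-wise shrinkage is what couples the "tasks": a template either receives weight from many particles or is driven to zero for all of them, which is the joint sparsity the mixed norm enforces.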

  8. Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.

    Science.gov (United States)

    Gabbard, Carl; Ammar, Diala; Cordova, Alberto

    2009-01-01

    We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two visual modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual- or motor-interference task combined with an MI or VI reaching task. We expected increased error would occur when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities for the imaged (both MI and VI) task and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.

  9. Constructing visual representations

    DEFF Research Database (Denmark)

    Huron, Samuel; Jansen, Yvonne; Carpendale, Sheelagh

    2014-01-01

    The accessibility of infovis authoring tools to a wide audience has been identified as a major research challenge. A key task in the authoring process is the development of visual mappings. While the infovis community has long been deeply interested in finding effective visual mappings, comparatively little attention has been placed on how people construct visual mappings. In this paper, we present the results of a study designed to shed light on how people transform data into visual representations. We asked people to create, update and explain their own information visualizations using only tangible building blocks. We learned that all participants, most of whom had little experience in visualization authoring, were readily able to create and talk about their own visualizations. Based on our observations, we discuss participants’ actions during the development of their visual representations…

  10. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice

    OpenAIRE

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    Abstract For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus so that using visual information avoided excessive swimming toward ...

  11. Binocular glaucomatous visual field loss and its impact on visual exploration--a supermarket study.

    Science.gov (United States)

    Sippel, Katrin; Kasneci, Enkelejda; Aehling, Kathrin; Heister, Martin; Rosenstiel, Wolfgang; Schiefer, Ulrich; Papageorgiou, Elena

    2014-01-01

    Advanced glaucomatous visual field loss may critically interfere with quality of life. The purpose of this study was to (i) assess the impact of binocular glaucomatous visual field loss on a supermarket search task as an example of everyday living activities, (ii) identify factors influencing the performance, and (iii) investigate the related compensatory mechanisms. Ten patients with binocular glaucoma (GP) and ten healthy-sighted control subjects (GC) were asked to collect twenty different products chosen randomly in two supermarket racks as quickly as possible. The task performance was rated as "passed" or "failed" with regard to the time per correctly collected item. Based on the performance of control subjects, the threshold value for failing the task was defined as μ+3σ (in seconds per correctly collected item). Eye movements were recorded by means of a mobile eye tracker. Eight out of ten patients with glaucoma and all control subjects passed the task. Patients who failed the task needed significantly more time (111.47 s ±12.12 s) to complete the task than patients who passed (64.45 s ±13.36 s, t-test, p < …) … supermarket task. However, a considerable number of patients, who compensate by frequent glancing towards the VFD, showed successful task performance. Therefore, systematic exploration of the VFD area seems to be a "time-effective" compensatory mechanism during the present supermarket task.
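
    The pass/fail criterion above reduces to a one-line computation on the control group's times per correctly collected item; a sketch with made-up numbers (the per-subject times are not given in the abstract):

```python
import numpy as np

# illustrative control-group times (seconds per correctly collected item)
control = np.array([52.0, 58.5, 61.2, 55.7, 60.1, 57.3, 63.0, 54.8, 59.4, 56.6])

mu, sigma = control.mean(), control.std(ddof=1)
threshold = mu + 3 * sigma          # "failed" if slower than mu + 3*sigma

for t in (64.5, 111.5):             # times resembling the two patient subgroups
    verdict = "failed" if t > threshold else "passed"
    print(f"{t:6.1f} s/item -> {verdict} (threshold {threshold:.1f} s/item)")
```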

  12. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  13. Task relevance modulates the cortical representation of feature conjunctions in the target template.

    Science.gov (United States)

    Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan

    2017-07-03

    Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
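
    As a rough illustration of the analysis logic (not the authors' pipeline; the array shapes and the toy "same task vs. different task" model below are assumptions), representational similarity analysis reduces each condition to a response-pattern vector, builds a condition-by-condition representational dissimilarity matrix (RDM), and compares RDMs with a rank correlation:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm(patterns):
    """patterns: (n_conditions, n_features) -> condition-by-condition dissimilarity (1 - r)."""
    return squareform(pdist(patterns, metric="correlation"))

def compare_rdms(a, b):
    """Rank-correlate the lower triangles of two RDMs."""
    idx = np.tril_indices_from(a, k=-1)
    rho, _ = spearmanr(a[idx], b[idx])
    return rho

# toy example: 8 cue conditions (first 4 = one task, last 4 = the other), 200 voxels
rng = np.random.default_rng(1)
task_label = np.repeat([0, 1], 4)
task_patterns = rng.standard_normal((2, 200))             # one shared pattern per task
patterns = rng.standard_normal((8, 200)) + 1.5 * task_patterns[task_label]
model_rdm = (task_label[:, None] != task_label[None, :]).astype(float)
print("neural-to-model RDM correlation:", round(compare_rdms(rdm(patterns), model_rdm), 3))
```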

  14. The Impact of Early Visual Deprivation on Spatial Hearing: A Comparison between Totally and Partially Visually Deprived Children

    Science.gov (United States)

    Cappagli, Giulia; Finocchietti, Sara; Cocchi, Elena; Gori, Monica

    2017-01-01

    The specific role of early visual deprivation on spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and to the difficulty in recruiting young blind children from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization have been assessed in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR), with the main aim of understanding whether visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: firstly, totally blind children performed more poorly overall than sighted and low vision children in all the spatial tasks performed; secondly, low vision children performed as well as or better than sighted children in the same auditory spatial tasks; thirdly, higher residual visual acuity was positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that more residual vision is associated with better spatial performance. These results suggest that early visual experience has an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in the case of low vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and of early intervention to prevent the risk of isolation and social exclusion. PMID:28443040

  15. Visual dominance in olfactory memory.

    Science.gov (United States)

    Batic, N; Gabassi, P G

    1987-08-01

    The object of the present study was to verify the emergence of a 'visual dominance' effect in memory tests involving different sensory modes (sight and smell), brought about by preattentive mechanisms that select the visual sensory mode regardless of the recall task.

  16. The representational dynamics of task and object processing in humans

    Science.gov (United States)

    Bankson, Brett B; Harel, Assaf

    2018-01-01

    Despite the importance of an observer’s goals in determining how a visual object is categorized, surprisingly little is known about how humans process the task context in which objects occur and how it may interact with the processing of objects. Using magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI) and multivariate techniques, we studied the spatial and temporal dynamics of task and object processing. Our results reveal a sequence of separate but overlapping task-related processes spread across frontoparietal and occipitotemporal cortex. Task exhibited late effects on object processing by selectively enhancing task-relevant object features, with limited impact on the overall pattern of object representations. Combining MEG and fMRI data, we reveal a parallel rise in task-related signals throughout the cerebral cortex, with an increasing dominance of task over object representations from early to higher visual areas. Collectively, our results reveal the complex dynamics underlying task and object representations throughout human cortex. PMID:29384473

  17. A Ventral Visual Stream Reading Center Independent of Sensory Modality and Visual Experience

    Directory of Open Access Journals (Sweden)

    Lior Reich

    2011-10-01

    Full Text Available The Visual Word Form Area (VWFA) is a ventral-temporal-visual area that develops expertise for visual reading. It encodes letter-strings irrespective of case, font, or location in the visual field, with striking anatomical reproducibility across individuals. In the blind, reading can be achieved using Braille, with a level of expertise comparable to that of sighted readers. We investigated which area plays the role of the VWFA in the blind. One would expect it to be at either parietal or bilateral occipital cortex, reflecting the tactile nature of the task and crossmodal plasticity, respectively. However, according to the notion that brain areas are task specific rather than sensory-modality specific, we predicted recruitment of the left-hemispheric VWFA, identically to the sighted and independent of visual experience. Using fMRI we showed that activation during Braille reading in congenitally blind individuals peaked in the VWFA, with striking anatomical consistency within and between blind and sighted. The VWFA was reading-selective when contrasted to high-level language and low-level sensory controls. Further preliminary results show that the VWFA is selectively activated also when people learn to read in a new language or using a different modality. Thus, the VWFA is a multisensory area specialized for reading regardless of visual experience.

  18. Brain deactivation in the outperformance in bimodal tasks: an FMRI study.

    Directory of Open Access Journals (Sweden)

    Tzu-Ching Chiang

    Full Text Available While it is known that some individuals can effectively perform two tasks simultaneously, other individuals cannot. How the brain deals with performing simultaneous tasks remains unclear. In the present study, we aimed to assess which brain areas corresponded to various phenomena in task performance. Nineteen subjects were requested to sequentially perform three blocks of tasks, including two unimodal tasks and one bimodal task. The unimodal tasks measured either visual feature binding or auditory pitch comparison, while the bimodal task required performance of the two tasks simultaneously. The functional magnetic resonance imaging (fMRI) results are compatible with previous studies showing that distinct brain areas, such as the visual cortices, frontal eye field (FEF), lateral parietal lobe (BA7), and medial and inferior frontal lobe, are involved in processing of visual unimodal tasks. In addition, the temporal lobes and Brodmann area 43 (BA43) were involved in processing of auditory unimodal tasks. These results lend support to concepts of modality-specific attention. Compared to the unimodal tasks, bimodal tasks required activation of additional brain areas. Furthermore, while deactivated brain areas were related to good performance in the bimodal task, these areas were not deactivated where the subject performed well in only one of the two simultaneous tasks. These results indicate that efficient information processing does not require some brain areas to be overly active; rather, the specific brain areas need to be relatively deactivated to remain alert and perform well on two tasks simultaneously. Meanwhile, it can also offer a neural basis for biofeedback in training courses, such as courses in how to perform multiple tasks simultaneously.

  19. Visual cues for data mining

    Science.gov (United States)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.

  20. Age-related declines of stability in visual perceptual learning.

    Science.gov (United States)

    Chang, Li-Hung; Shibata, Kazuhisa; Andersen, George J; Sasaki, Yuka; Watanabe, Takeo

    2014-12-15

    One of the biggest questions in learning is how a system can resolve the plasticity and stability dilemma. Specifically, the learning system needs to have not only a high capability of learning new items (plasticity) but also a high stability to retain important items or processing in the system by preventing unimportant or irrelevant information from being learned. This dilemma should hold true for visual perceptual learning (VPL), which is defined as a long-term increase in performance on a visual task as a result of visual experience. Although it is well known that aging influences learning, the effect of aging on the stability and plasticity of the visual system is unclear. To address the question, we asked older and younger adults to perform a task while a task-irrelevant feature was merely exposed. We found that older individuals learned the task-irrelevant features that younger individuals did not learn, both the features that were sufficiently strong for younger individuals to suppress and the features that were too weak for younger individuals to learn. At the same time, there was no plasticity reduction in older individuals within the task tested. These results suggest that the older visual system is less stable to unimportant information than the younger visual system. A learning problem with older individuals may be due to a decrease in stability rather than a decrease in plasticity, at least in VPL. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
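
    The sensitivity measure d′ used here is standard signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A small sketch (the counts are illustrative, and the log-linear correction for extreme rates is a common convention rather than something specified in the abstract):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction for 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# illustrative counts for two conditions of a yes/no visual discrimination task
print(round(d_prime(78, 22, 15, 85), 2))   # e.g., with a spatiotemporally congruent sound
print(round(d_prime(65, 35, 18, 82), 2))   # e.g., visual-only baseline
```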

  2. Effects of regular aerobic exercise on visual perceptual learning.

    Science.gov (United States)

    Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas

    2017-12-02

    This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Visual Storytelling

    OpenAIRE

    Huang, Ting-Hao; Ferraro, Francis; Mostafazadeh, Nasrin; Misra, Ishan; Agrawal, Aishwarya; Devlin, Jacob; Girshick, Ross; He, Xiaodong; Kohli, Pushmeet; Batra, Dhruv; Zitnick, C. Lawrence; Parikh, Devi; Vanderwende, Lucy; Galley, Michel

    2016-01-01

    We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling. The first release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211 sequences, aligned to both descriptive (caption) and story language. We establish several strong baselines for the storytelling task, and motivate an automatic metric to benchmark progress. Modelling concrete description as well as figurative and social language, as prov...

  4. Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.

    Science.gov (United States)

    Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas

    2017-01-01

    In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.

  5. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    Science.gov (United States)

    Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A

    2016-01-01

    Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.

  6. Short-term memory for auditory and visual durations: evidence for selective interference effects.

    Science.gov (United States)

    Rattat, Anne-Claire; Picard, Delphine

    2012-01-01

    The present study sought to determine the format in which visual, auditory and auditory-visual durations ranging from 400 to 600 ms are encoded and maintained in short-term memory, using suppression conditions. Participants compared two stimulus durations separated by an interval of 8 s. During this time, they performed either an articulatory suppression task, a visuospatial tracking task or no specific task at all (control condition). The results showed that the articulatory suppression task decreased recognition performance for auditory durations but not for visual or bimodal ones, whereas the visuospatial task decreased recognition performance for visual durations but not for auditory or bimodal ones. These findings support the modality-specific account of short-term memory for durations.

  7. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    Science.gov (United States)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it is known that the right posterior parietal cortex (PPC) has a role in certain visual search tasks, there is little knowledge about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC's involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied to the right PPC or the left PPC with a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. At SOA = 150 ms, compared with the no-TMS condition, there was a significant increase in target-present reaction time when TMS pulses were applied. We infer that the right PPC is involved in the visual search at about SOA = 150 ms after visual stimulus presentation. Magnetic stimulation to the right PPC disturbed the processing of the visual search, whereas magnetic stimulation to the left PPC had no effect.

  8. The Role of Motor Affordances in Visual Working Memory

    Directory of Open Access Journals (Sweden)

    Diane Pecher

    2014-12-01

    Full Text Available Motor affordances are important for object knowledge. Semantic tasks on visual objects often show interactions with motor actions. Prior neuro-imaging studies suggested that motor affordances also play a role in visual working memory for objects. When participants remembered manipulable objects (e.g., hammer), greater premotor cortex activation was observed than when they remembered non-manipulable objects (e.g., polar bear). In the present study participants held object pictures in working memory while performing concurrent tasks such as articulation of nonsense syllables and performing hand movements. Although concurrent tasks did interfere with working memory performance, in none of the experiments did we find any evidence that concurrent motor tasks affected memory differently for manipulable and non-manipulable objects. I conclude that motor affordances are not used for visual working memory.

  9. Visual memory errors in Parkinson's disease patients with visual hallucinations.

    Science.gov (United States)

    Barnes, J; Boubert, L

    2011-03-01

    The occurrence of visual hallucinations seems to be more prevalent in low light, and hallucinators tend to be more prone to false-positive errors in memory tasks. Here we investigated whether the richness of stimuli does indeed affect recognition differently in hallucinating and nonhallucinating participants and, if so, whether this difference extends to identifying spatial context. We compared 36 Parkinson's disease (PD) patients with visual hallucinations, 32 Parkinson's patients without hallucinations, and 36 age-matched controls on a visual memory task in which color and black-and-white pictures were presented at different locations. Participants had to recognize the pictures among distracters along with the location of the stimulus. Findings revealed clear differences in performance between the groups. Both PD groups had impaired recognition compared to the controls, but those with hallucinations were significantly more impaired on black-and-white than on color stimuli. In addition, the group with hallucinations was significantly impaired compared to the other two groups on spatial memory. We suggest that not only do PD patients have poorer recognition of pictorial stimuli than controls, but those who present with visual hallucinations also appear to be more heavily reliant on bottom-up sensory input and impaired in spatial ability.

  10. A comparison of visual and kinesthetic-tactual displays for compensatory tracking

    Science.gov (United States)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1983-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under certain conditions it can be an effective alternative or supplement to visual displays. In order to better understand how KT tracking compares with visual tracking, both critical tracking and stationary single-axis tracking tasks were conducted with and without velocity quickening. In the critical tracking task, the visual displays were superior; however, the quickened KT display was approximately equal to the unquickened visual display. In the stationary tracking tasks, subjects adopted lag equalization with the quickened KT and visual displays, and mean-squared error scores were approximately equal. With the unquickened displays, subjects adopted lag-lead equalization, and the visual displays were superior. This superiority was partly due to the servomotor lag in the implementation of the KT display and partly due to modality differences.

  11. Visual short-term memory load strengthens selective attention.

    Science.gov (United States)

    Roper, Zachary J J; Vecera, Shaun P

    2014-04-01

    Perceptual load theory accounts for many attentional phenomena; however, its mechanism remains elusive because it invokes underspecified attentional resources. Recent dual-task evidence has revealed that a concurrent visual short-term memory (VSTM) load slows visual search and reduces contrast sensitivity, but it is unknown whether a VSTM load also constricts attention in a canonical perceptual load task. If attentional selection draws upon VSTM resources, then distraction effects, which measure attentional "spill-over," will be reduced as competition for resources increases. Observers performed a low perceptual load flanker task during the delay period of a VSTM change detection task. We observed a reduction of the flanker effect in the perceptual load task as a function of increasing concurrent VSTM load. These findings were not due to perceptual-level interactions between the physical displays of the two tasks. Our findings suggest that perceptual representations of distractor stimuli compete with the maintenance of visual representations held in memory. We conclude that access to VSTM determines the degree of attentional selectivity; when VSTM is not completely taxed, it is more likely for task-irrelevant items to be consolidated and, consequently, affect responses. The "resources" hypothesized by load theory are at least partly mnemonic in nature, due to the strong correspondence they share with VSTM capacity.

  12. Visual processing in pure alexia

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Habekost, Thomas; Gerlach, Christian

    2010-01-01

    … affected. His visual apprehension span was markedly reduced for letters and digits. His reduced visual processing capacity was also evident when reporting letters from words. In an object decision task with fragmented pictures, NN's performance was abnormal. Thus, even in a pure alexic patient with intact …

  13. Shared filtering processes link attentional and visual short-term memory capacity limits.

    Science.gov (United States)

    Bettencourt, Katherine C; Michalka, Samantha W; Somers, David C

    2011-09-30

    Both visual attention and visual short-term memory (VSTM) have been shown to have capacity limits of 4 ± 1 objects, driving the hypothesis that they share a visual processing buffer. However, these capacity limitations also show strong individual differences, making the degree to which these capacities are related unclear. Moreover, other research has suggested a distinction between attention and VSTM buffers. To explore the degree to which capacity limitations reflect the use of a shared visual processing buffer, we compared individual subject's capacities on attentional and VSTM tasks completed in the same testing session. We used a multiple object tracking (MOT) and a VSTM change detection task, with varying levels of distractors, to measure capacity. Significant correlations in capacity were not observed between the MOT and VSTM tasks when distractor filtering demands differed between the tasks. Instead, significant correlations were seen when the tasks shared spatial filtering demands. Moreover, these filtering demands impacted capacity similarly in both attention and VSTM tasks. These observations fail to support the view that visual attention and VSTM capacity limits result from a shared buffer but instead highlight the role of the resource demands of underlying processes in limiting capacity.

  14. Differential Age Effects on Spatial and Visual Working Memory

    Science.gov (United States)

    Oosterman, Joukje M.; Morel, Sascha; Meijer, Lisette; Buvens, Cleo; Kessels, Roy P. C.; Postma, Albert

    2011-01-01

    The present study was intended to compare age effects on visual and spatial working memory by using two versions of the same task that differed only in presentation mode. The working memory task contained both a simultaneous and a sequential presentation mode condition, reflecting, respectively, visual and spatial working memory processes. Young…

  15. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
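
    The core quantity behind each row of an UpSet matrix is the size of an exclusive intersection, i.e., the elements that belong to exactly that combination of sets and no others; a small sketch of that computation on toy sets (not the authors' code):

```python
from itertools import combinations

sets = {
    "A": {1, 2, 3, 4, 5},
    "B": {3, 4, 5, 6},
    "C": {5, 6, 7},
}

def exclusive_intersections(named_sets):
    """Map each combination of set names to the elements belonging to exactly those sets."""
    names = list(named_sets)
    result = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(named_sets[n] for n in combo))
            outside = set().union(*(named_sets[n] for n in names if n not in combo))
            members = inside - outside
            if members:
                result[combo] = members
    return result

# one line per UpSet-style row: intersection, its size, its members
for combo, members in sorted(exclusive_intersections(sets).items(),
                             key=lambda kv: -len(kv[1])):
    print(" & ".join(combo), len(members), sorted(members))
```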

  16. Matching cue size and task properties in exogenous attention.

    Science.gov (United States)

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  17. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds' Search Performance in Spatial Rotation Tasks.

    Science.gov (United States)

    Ebersbach, Mirjam; Nawroth, Christian

    2016-01-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds' success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children ( N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children.

  18. The effects of visual discriminability and rotation angle on 30-month-olds’ search performance in spatial rotation tasks

    Directory of Open Access Journals (Sweden)

    Mirjam Ebersbach

    2016-10-01

    Full Text Available Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children.

  19. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds’ Search Performance in Spatial Rotation Tasks

    Science.gov (United States)

    Ebersbach, Mirjam; Nawroth, Christian

    2016-01-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children. PMID:27812346

  20. Differential effects of parietal and frontal inactivations on reaction times distributions in a visual search task

    Directory of Open Access Journals (Sweden)

    Claire eWardak

    2012-06-01

    Full Text Available The posterior parietal cortex participates in numerous cognitive functions, from perceptual to attentional and decisional processes. However, the same functions have also been attributed to the frontal cortex. We previously conducted a series of reversible inactivations of the lateral intraparietal area (LIP) and of the frontal eye field (FEF) in the monkey, which showed impairments in covert visual search performance, characterized mainly by an increase in the mean reaction time (RT) necessary to detect a contralesional target. Only subtle differences were observed between the inactivation effects in the two areas. In particular, the magnitude of the deficit was dependent on search task difficulty for LIP, but not for FEF. In the present study, we re-examine these data in order to dissociate the specific involvement of these two regions by considering the entire RT distribution instead of the mean RT. We use the LATER model to help us interpret the effects of the inactivations with regard to information accumulation rate and decision processes. We show that: (1) different search strategies can be used by monkeys to perform visual search, either by processing the visual scene in parallel or by combining parallel and serial processes; and (2) LIP and FEF inactivations have very different effects on the RT distributions in the two monkeys. Although our results are not conclusive with regard to the exact functional mechanisms affected by the inactivations, the effects we observe on RT distributions could be accounted for by an involvement of LIP in saliency representation or decision-making, and an involvement of FEF in attentional shifts and perception. Finally, we observe that the use of the LATER model is limited in the context of visual search, as it cannot fit all the behavioural strategies encountered. We propose that the diversity in search strategies observed in our monkeys also exists in individual human subjects and should be considered in future …
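
    For readers unfamiliar with it, the LATER model treats each reaction time as the time a linearly rising decision signal needs to reach a fixed threshold, with the rise rate drawn afresh from a normal distribution on every trial, so the reciprocal of RT is approximately Gaussian. A generic simulation sketch (parameter values are illustrative, not fitted to the monkey data):

```python
import numpy as np

def simulate_later(n_trials, mu_rate=5.0, sd_rate=1.0, threshold=1.0, seed=0):
    """LATER: a decision signal rises at rate r ~ Normal(mu, sd) until it hits a threshold."""
    rng = np.random.default_rng(seed)
    rate = rng.normal(mu_rate, sd_rate, n_trials)
    rate = rate[rate > 0]                 # discard (rare) non-rising trials
    return threshold / rate               # reaction times in seconds

rt = simulate_later(20_000)
print(f"median RT = {np.median(rt) * 1000:.0f} ms")

# the model's signature: 1/RT is approximately normally distributed, so RTs fall on
# a straight line when plotted on a 'reciprobit' axis
recip = 1.0 / rt
print(f"mean(1/RT) = {recip.mean():.2f} 1/s, sd(1/RT) = {recip.std():.2f} 1/s")
```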

  1. The effect of non-visual working memory load on top-down modulation of visual processing.

    Science.gov (United States)

    Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark

    2009-06-01

    While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of task-relevant and irrelevant visual representations. In our dual-task paradigm, each trial began with the auditory presentation of six random (high load) or sequentially ordered (low load) digits. Next, two relevant visual stimuli (e.g., faces), presented amongst two temporally interspersed visual distractors (e.g., scenes), were to be encoded and maintained across a 7-s delay interval, after which memory for the relevant images and digits was probed. When taxed by high load digit maintenance, participants exhibited impaired performance on the visual WM task and a selective failure to attenuate the neural processing of task-irrelevant scene stimuli. The over-processing of distractor scenes under high load was indexed by elevated encoding activity in a scene-selective region-of-interest relative to low load and passive viewing control conditions, as well as by improved long-term recognition memory for these items. In contrast, the load manipulation did not affect participants' ability to upregulate activity in this region when scenes were task-relevant. These results highlight the critical role of domain-general WM resources in the goal-directed regulation of distractor processing. Moreover, the consequences of increased WM load in young adults closely resemble the effects of cognitive aging on distractor filtering [Gazzaley, A., Cooney, J. W., Rissman, J., & D'Esposito, M. (2005). Top-down suppression deficit underlies working memory impairment in normal aging. Nature Neuroscience 8, 1298-1300], suggesting the possibility of a common underlying mechanism.

  2. Neural mechanisms underlying temporal modulation of visual perception

    NARCIS (Netherlands)

    Jong, M.C. de

    2015-01-01

    However confident we feel about the way we perceive the visual world around us, there is not a one-to-one relation between visual stimulation and visual perception. Our eyes register reflections of the visual environment and our brain has the difficult task of constructing ‘reality’ from this

  3. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    Science.gov (United States)

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  4. Evaluation of kinesthetic-tactual displays using a critical tracking task

    Science.gov (United States)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.; Ault, R. T.

    1977-01-01

    The study sought to investigate the feasibility of applying the critical tracking task paradigm to the evaluation of kinesthetic-tactual displays. Four subjects attempted to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. Display aiding was introduced in both modalities in the form of velocity quickening. Visual tracking performance was better than tactual tracking, and velocity aiding improved the critical tracking scores for visual and tactual tracking about equally. The results suggest that the critical task methodology holds considerable promise for evaluating kinesthetic-tactual displays.

  5. Optimization of Visual Information Presentation for Visual Prosthesis

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2018-01-01

    Full Text Available Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis.
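
    The two strategies can be approximated generically as: locate the salient region, crop and zoom it into the low-resolution prosthetic "display", and optionally render only its edges. The sketch below is a loose approximation using OpenCV and a spectral-residual-style saliency map, not the authors' pipeline; the saliency method, thresholds, and output sizes are all assumptions.

```python
import cv2
import numpy as np

def saliency_map(gray):
    """Spectral-residual-style saliency (after Hou & Zhang, 2007) on a grayscale image."""
    small = cv2.resize(gray, (64, 64)).astype(np.float32)
    spectrum = np.fft.fft2(small)
    log_amp = np.log(np.abs(spectrum) + 1e-8).astype(np.float32)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = cv2.GaussianBlur(sal.astype(np.float32), (9, 9), 2.5)
    sal = cv2.resize(sal, (gray.shape[1], gray.shape[0]))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def foreground_zoom(bgr, out_size=(32, 32)):
    """Strategy 1: crop the most salient region and zoom it into a low-resolution grid."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    sal = saliency_map(gray)
    ys, xs = np.where(sal > sal.mean() + sal.std())    # crude foreground mask
    if xs.size == 0:
        return cv2.resize(gray, out_size)              # fall back to the whole scene
    crop = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, out_size)

def foreground_edges(bgr, out_size=(32, 32)):
    """Strategy 2: keep only the edges of the salient region, suppressing the background."""
    zoomed = foreground_zoom(bgr, out_size=(128, 128))
    edges = cv2.Canny(zoomed, 50, 150)
    return cv2.resize(edges, out_size, interpolation=cv2.INTER_NEAREST)
```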

  6. Optimization of Visual Information Presentation for Visual Prosthesis

    Science.gov (United States)

    Gao, Yong

    2018-01-01

    Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis. PMID:29731769

  7. Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.

    Science.gov (United States)

    Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J

    2017-02-01

    Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, there is also great fluctuation of vividness within individuals, such that only looking at differences between people necessarily obscures the picture. In this study, we show that variation in moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery is similar to the neural response during perception, the more vivid or perception-like the imagery experience is. Copyright © 2017 the authors 0270-6474/17/371367-07$15.00/0.

  8. Visual Creativity across Cultures: A Comparison between Italians and Japanese

    Science.gov (United States)

    Palmiero, Massimiliano; Nakatani, Chie; van Leeuwen, Cees

    2017-01-01

    Culture-related differences in visual creativity were investigated, comparing Italian and Japanese participants in terms of divergent (figural completion task) and product-oriented thinking (figural combination task). Visual restructuring ability was measured as the ability to reinterpret ambiguous figures and was included as a covariate. Results…

  9. Visual attention modulates brain activation to angry voices.

    Science.gov (United States)

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  10. Towards the quantitative evaluation of visual attention models.

    Science.gov (United States)

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
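    One common way to operationalize such a benchmark is fixation prediction during free viewing, scored with an ROC-style metric. The sketch below is illustrative only (it is not taken from the paper): it computes a simplified AUC by treating fixated pixels as positives and all remaining pixels as negatives.

```python
import numpy as np

def saliency_auc(saliency_map, fixation_mask, n_thresholds=100):
    """ROC AUC of a saliency map against a binary fixation mask (a simplified AUC-Judd)."""
    sal = saliency_map.ravel().astype(float)
    fix = fixation_mask.ravel().astype(bool)
    thresholds = np.linspace(sal.min(), sal.max(), n_thresholds)
    # True/false positive rates as the threshold sweeps over saliency values.
    tpr = [(sal[fix] >= t).mean() for t in thresholds]
    fpr = [(sal[~fix] >= t).mean() for t in thresholds]
    # Integrate the ROC curve; rates are reversed so fpr increases left to right.
    return float(np.trapz(tpr[::-1], fpr[::-1]))

# Tiny usage example with a synthetic map and two fixated pixels.
sal_map = np.random.rand(64, 64)
fix_mask = np.zeros((64, 64), dtype=bool)
fix_mask[10, 12] = fix_mask[40, 33] = True
print(f"AUC = {saliency_auc(sal_map, fix_mask):.2f}")   # ~0.5 for a random map
```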

  11. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    Directory of Open Access Journals (Sweden)

    Matthew A Gannon

    Full Text Available Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provide insights into the attentional mechanisms associated with such spatial scaling.

  12. Egocentric and allocentric alignment tasks are affected by otolith input.

    Science.gov (United States)

    Tarnutzer, Alexander A; Bockisch, Christopher J; Olasagasti, Itsaso; Straumann, Dominik

    2012-06-01

    Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, which is most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks and whether the modulation of variability is restricted to vision-dependent paradigms is unknown. In nine subjects we compared precision and accuracy of gravicentric and egocentric alignments in various roll positions (upright, 45°, and 75° right-ear down) using a luminous line (visual paradigm) in darkness. Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain the roll-angle-dependent modulation in egocentric tasks: 1) Modulating variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that variability modulation is restricted to vision-dependent alignments. 2) Estimated body-longitudinal reflects the roll-dependent variability of perceived earth-vertical. Gravicentric cues are thereby integrated regardless of the task's reference frame. To test the two hypotheses the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision significantly decreased with increasing head roll for both tasks. These findings propose that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. In analogy to gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate

  13. Concrete and abstract visualizations in history learning tasks

    NARCIS (Netherlands)

    Prangsma, M.E.; van Boxtel, C.A.M.; Kanselaar, G.; Kirschner, P.A.

    2009-01-01

    Background: History learning requires that students understand historical phenomena, abstract concepts and the relations between them. Students have problems grasping, using and relating complex historical developments and structures. Aims: A study was conducted to determine the effects of tasks

  14. Food's visually perceived fat content affects discrimination speed in an orthogonal spatial task.

    Science.gov (United States)

    Harrar, Vanessa; Toepel, Ulrike; Murray, Micah M; Spence, Charles

    2011-10-01

    Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.

  15. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing.

    Directory of Open Access Journals (Sweden)

    Rebecca E Paladini

    Full Text Available Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings, in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Thereby, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries. Thereby, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when

  16. VISMASHUP: streamlining the creation of custom visualization applications

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory]; Santos, Emanuele [UNIV OF UTAH]; Lins, Lauro [UNIV OF UTAH]; Freire, Juliana [UNIV OF UTAH]; Silva, Cláudio T [UNIV OF UTAH]

    2010-01-01

    Visualization is essential for understanding the increasing volumes of digital data. However, the process required to create insightful visualizations is involved and time consuming. Although several visualization tools are available, including tools with sophisticated visual interfaces, they are out of reach for users who have little or no knowledge of visualization techniques and/or who do not have programming expertise. In this paper, we propose VISMASHUP, a new framework for streamlining the creation of customized visualization applications. Because these applications can be customized for very specific tasks, they can hide much of the complexity in a visualization specification and make it easier for users to explore visualizations by manipulating a small set of parameters. We describe the framework and how it supports the various tasks a designer needs to carry out to develop an application, from mining and exploring a set of visualization specifications (pipelines), to the creation of simplified views of the pipelines, and the automatic generation of the application and its interface. We also describe the implementation of the system and demonstrate its use in two real application scenarios.

  17. A model for the pilot's use of motion cues in roll-axis tracking tasks

    Science.gov (United States)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  18. Effect of Postural Control Demands on Early Visual Evoked Potentials during a Subjective Visual Vertical Perception Task in Adolescents with Idiopathic Scoliosis.

    Science.gov (United States)

    Chang, Yi-Tzu; Meng, Ling-Fu; Chang, Chun-Ju; Lai, Po-Liang; Lung, Chi-Wen; Chern, Jen-Suh

    2017-01-01

    Subjective visual vertical (SVV) judgment and standing stability were separately investigated among patients with adolescent idiopathic scoliosis (AIS). Although one study has investigated the central mechanism of stability control in the AIS population, the relationships between SVV, decreased standing stability, and AIS have never been investigated. Through event-related potentials (ERPs), the present study examined the effect of postural control demands (PDs) on AIS central mechanisms related to SVV judgment and standing stability to elucidate the time-serial stability control process. Thirteen AIS subjects (AIS group) and 13 age-matched adolescents (control group) aged 12-18 years were recruited. Each subject had to complete an SVV task (i.e., the modified rod-and-frame [mRAF] test) as a stimulus, with online electroencephalogram recording being performed in the following three standing postures: feet shoulder-width apart standing, feet together standing, and tandem standing. The behavioral performance in terms of postural stability (center of pressure excursion), SVV (accuracy and reaction time), and mRAF-locked ERPs (mean amplitude and peak latency of the P1, N1, and P2 components) was then compared between the AIS and control groups. In the behavioral domain, the results revealed that only the AIS group demonstrated a significantly accelerated SVV reaction time as the PDs increased. In the cerebral domain, significantly larger P2 mean amplitudes were observed during both feet shoulder-width-apart standing and feet together standing postures compared with during tandem standing. No group differences were noted in the cerebral domain. The results indicated that (1) during the dual-task paradigm, a differential behavioral strategy of accelerated SVV reaction time was observed in the AIS group only when the PDs increased and (2) the decrease in P2 mean amplitudes with the increase in the PD levels might be direct evidence of the competition for central

  19. Effect of Postural Control Demands on Early Visual Evoked Potentials during a Subjective Visual Vertical Perception Task in Adolescents with Idiopathic Scoliosis

    Directory of Open Access Journals (Sweden)

    Yi-Tzu Chang

    2017-06-01

    Full Text Available Subjective visual vertical (SVV) judgment and standing stability were separately investigated among patients with adolescent idiopathic scoliosis (AIS). Although one study has investigated the central mechanism of stability control in the AIS population, the relationships between SVV, decreased standing stability, and AIS have never been investigated. Through event-related potentials (ERPs), the present study examined the effect of postural control demands (PDs) on AIS central mechanisms related to SVV judgment and standing stability to elucidate the time-serial stability control process. Thirteen AIS subjects (AIS group) and 13 age-matched adolescents (control group) aged 12–18 years were recruited. Each subject had to complete an SVV task (i.e., the modified rod-and-frame [mRAF] test) as a stimulus, with online electroencephalogram recording being performed in the following three standing postures: feet shoulder-width apart standing, feet together standing, and tandem standing. The behavioral performance in terms of postural stability (center of pressure excursion), SVV (accuracy and reaction time), and mRAF-locked ERPs (mean amplitude and peak latency of the P1, N1, and P2 components) was then compared between the AIS and control groups. In the behavioral domain, the results revealed that only the AIS group demonstrated a significantly accelerated SVV reaction time as the PDs increased. In the cerebral domain, significantly larger P2 mean amplitudes were observed during both feet shoulder-width-apart standing and feet together standing postures compared with during tandem standing. No group differences were noted in the cerebral domain. The results indicated that (1) during the dual-task paradigm, a differential behavioral strategy of accelerated SVV reaction time was observed in the AIS group only when the PDs increased and (2) the decrease in P2 mean amplitudes with the increase in the PD levels might be direct evidence of the competition for

  20. Visual Recognition Memory across Contexts

    Science.gov (United States)

    Jones, Emily J. H.; Pascalis, Olivier; Eacott, Madeline J.; Herbert, Jane S.

    2011-01-01

    In two experiments, we investigated the development of representational flexibility in visual recognition memory during infancy using the Visual Paired Comparison (VPC) task. In Experiment 1, 6- and 9-month-old infants exhibited recognition when familiarization and test occurred in the same room, but showed no evidence of recognition when…

  1. Age-Related Deficits of Dual-Task Walking: A Review

    Directory of Open Access Journals (Sweden)

    Rainer Beurskens

    2012-01-01

    Full Text Available This review summarizes our present knowledge about elderly people's problems with walking. We highlight the plastic changes in the brain that allow a partial compensation of these age-related deficits and discuss the associated costs and limitations. Experimental evidence for the crucial role of executive functions and working memory is presented, leading us to the hypothesis that it is difficult for seniors to coordinate two streams of visual information, one related to navigation through visually defined space, and the other to a visually demanding second task. This hypothesis predicts that interventions aimed at the efficiency of visuovisual coordination in the elderly will ameliorate their deficits in dual-task walking.

  2. Impaired Visual Motor Coordination in Obese Adults.

    LENUS (Irish Health Repository)

    Gaul, David

    2016-09-01

    Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched control) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP) indicating poorer level of coordination, increased movement variability (p < 0.05), and a larger amplitude (p < 0.05) than their healthy weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies for obese participants. The obese group have greater difficulty in synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.

  3. Attention biases visual activity in visual short-term memory.

    Science.gov (United States)

    Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina

    2014-07-01

    In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.

  4. The influence of artificial scotomas on eye movements during visual search

    NARCIS (Netherlands)

    Cornelissen, FW; Bruin, KJ; Kooijman, AC

    Purpose. Fixation durations are normally adapted to the difficulty of the foveal analysis task. We examine to what extent artificial central and peripheral visual field defects interfere with this adaptation process. Methods. Subjects performed a visual search task while their eye movements were

  5. Active listening impairs visual perception and selectivity: an ERP study of auditory dual-task costs on visual attention.

    Science.gov (United States)

    Gherri, Elena; Eimer, Martin

    2011-04-01

    The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.

  6. Mental workload while driving: effects on visual search, discrimination, and decision making.

    Science.gov (United States)

    Recarte, Miguel A; Nunes, Luis M

    2003-06-01

    The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as performance criteria. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.

  7. Learning Convolutional Text Representations for Visual Question Answering

    OpenAIRE

    Wang, Zhengyang; Ji, Shuiwang

    2017-01-01

    Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual...

  8. A Strategy for Uncertainty Visualization Design

    Science.gov (United States)

    2009-10-01

  9. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  10. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking Inorganic chemistry working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  11. The phonological and visual basis of developmental dyslexia in Brazilian Portuguese reading children.

    Directory of Open Access Journals (Sweden)

    Giseli Donadon Germano

    2014-10-01

    Full Text Available Evidence from opaque languages suggests that visual attention processing abilities in addition to phonological skills may act as cognitive underpinnings of developmental dyslexia. We explored the role of these two cognitive abilities on reading fluency in Brazilian Portuguese, a more transparent orthography than French or English. Sixty-six dyslexic and normal Brazilian Portuguese children participated. They were administered three tasks of phonological skills (phoneme identification, phoneme and syllable blending) and three visual tasks (a letter global report task and two non-verbal tasks of visual closure and visual constancy). Results show that Brazilian Portuguese dyslexic children are impaired not only in phonological processing but further in visual processing. The phonological and visual processing abilities significantly and independently contribute to reading fluency in the whole population. Last, different cognitively homogeneous subtypes can be identified in the Brazilian Portuguese dyslexic population. Two subsets of dyslexic children were identified as having a single cognitive disorder, phonological or visual; another group exhibited a double deficit and a few children showed no visual or phonological disorder. Thus the current findings extend previous data from more opaque orthographies such as French and English, in showing the importance of investigating visual processing skills in addition to phonological skills in dyslexic children whatever their language orthography transparency.

  12. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Visualization analysis and design

    CERN Document Server

    Munzner, Tamara

    2015-01-01

    Visualization Analysis and Design provides a systematic, comprehensive framework for thinking about visualization in terms of principles and design choices. The book features a unified approach encompassing information visualization techniques for abstract data, scientific visualization techniques for spatial data, and visual analytics techniques for interweaving data transformation and analysis with interactive visual exploration. It emphasizes the careful validation of effectiveness and the consideration of function before form. The book breaks down visualization design according to three questions: what data users need to see, why users need to carry out their tasks, and how the visual representations proposed can be constructed and manipulated. It walks readers through the use of space and color to visually encode data in a view, the trade-offs between changing a single view and using multiple linked views, and the ways to reduce the amount of data shown in each view. The book concludes with six case stu...

  14. Using Visualization to Generalize on Quadratic Patterning Tasks

    Science.gov (United States)

    Kirwan, J. Vince

    2017-01-01

    Patterning tasks engage students in a core aspect of algebraic thinking-generalization (Kaput 2008). The National Council of Teachers of Mathematics (NCTM) Algebra Standard states that students in grades 9-12 should "generalize patterns using explicitly defined and recursively defined functions" (NCTM 2000, p. 296). Although educators…

  15. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    Science.gov (United States)

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  16. A 'snapshot' of the visual search behaviours of medical sonographers

    OpenAIRE

    Carrigan, Ann J; Brennan, Patrick C; Pietrzyk, Mariusz; Clarke, Jillian; Chekaluk, Eugene

    2015-01-01

    Abstract Introduction: Visual search is a task that humans perform in everyday life. Whether it involves looking for a pen on a desk or a mass in a mammogram, the cognitive and perceptual processes that underpin these tasks are identical. Radiologists are experts in visual search of medical images and studies on their visual search behaviours have revealed some interesting findings with regard to diagnostic errors. In Australia, within the modality of ultrasound, sonographers perform the diag...

  17. Different effects of executive and visuospatial working memory on visual consciousness.

    Science.gov (United States)

    De Loof, Esther; Poppe, Louise; Cleeremans, Axel; Gevers, Wim; Van Opstal, Filip

    2015-11-01

    Consciousness and working memory are two widely studied cognitive phenomena. Although they have been closely tied on a theoretical and neural level, empirical work that investigates their relation is largely lacking. In this study, the relationship between visual consciousness and different working memory components is investigated by using a dual-task paradigm. More specifically, while participants were performing a visual detection task to measure their visual awareness threshold, they had to concurrently perform either an executive or visuospatial working memory task. We hypothesized that visual consciousness would be hindered depending on the type and the size of the load in working memory. Results showed that maintaining visuospatial content in working memory hinders visual awareness, irrespective of the amount of information maintained. By contrast, the detection threshold was progressively affected under increasing executive load. Interestingly, increasing executive load had a generic effect on detection speed, calling into question whether its obstructing effect is specific to the visual awareness threshold. Together, these results indicate that visual consciousness depends differently on executive and visuospatial working memory.

  18. Factors Associated with Visual Fatigue from Curved Monitor Use: A Prospective Study of Healthy Subjects.

    Directory of Open Access Journals (Sweden)

    Haeng Jin Lee

    Full Text Available To investigate the factors associated with visual fatigue using monitors with various radii of curvature. Twenty normal healthy adults (8 men, 12 women; mean age, 26.2 ± 2.5 years) prospectively watched five types of monitors including flat, 4000R, 3000R, 2000R, and 1000R curved monitors for 30 min. An experienced examiner measured the ophthalmological factors including near point of accommodation (NPA), near point of convergence (NPC), refraction, parameters during pupil response at light and saccadic movement just before and after the visual tasks. The questionnaires about subjective ocular symptoms were also investigated just before and after the visual tasks. The NPA increased after the visual tasks with a flat monitor compared with the curved monitors, with the 1000R curved monitor showing the smallest change (p = 0.020). The NPC increased for every monitor after the visual tasks; the largest increase occurred with the flat monitor (p = 0.001). There was no difference in refractive error, pupil response, or saccadic movement in the comparison of before and after the visual tasks. Among the nine factors in the questionnaire, the score of "eye pain" was significantly higher for the flat monitor versus the 1000R curved monitor after the visual tasks (p = 0.034). We identified NPA, NPC, and eye pain as factors associated with visual fatigue. Also, the curvature of the monitor was related to the visual fatigue.

  19. Visualization of hierarchically structured information for human-computer interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Suh Hyun; Lee, J. K.; Choi, I. K.; Kye, S. C.; Lee, N. K. [Dongguk University, Seoul (Korea)]

    2001-11-01

    Visualization techniques can be used to support operator's information navigation tasks on the system especially consisting of an enormous volume of information, such as operating information display system and computerized operating procedure system in advanced control room of nuclear power plants. By offering an easy understanding environment of hierarchically structured information, these techniques can reduce the operator's supplementary navigation task load. As a result of that, operators can pay more attention on the primary tasks and ultimately improve the cognitive task performance. In this report, an interface was designed and implemented using hyperbolic visualization technique, which is expected to be applied as a means of optimizing operator's information navigation tasks. 15 refs., 19 figs., 32 tabs. (Author)
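    The hyperbolic technique mentioned in this record lays a large hierarchy out in a disk so that the node in focus stays readable while ancestors and siblings shrink toward the rim. The toy layout below only approximates that focus+context effect with a tanh-compressed radial placement; it is not a true Poincaré-disk embedding and not the system described in the report, and the example tree and parameters are invented.

```python
import math

def focus_context_layout(tree, radius_step=1.0):
    """Place tree nodes in the unit disk: depth maps to a tanh-compressed radius, so deep
    levels crowd toward the rim while the root (focus) stays at the centre.
    `tree` is a dict {node: [children]} whose first key is the root."""
    positions = {}

    def place(node, depth, angle_lo, angle_hi):
        angle = 0.5 * (angle_lo + angle_hi)
        r = math.tanh(0.5 * depth * radius_step)          # 0 at the root, approaches 1 at the rim
        positions[node] = (r * math.cos(angle), r * math.sin(angle))
        children = tree.get(node, [])
        span = (angle_hi - angle_lo) / max(len(children), 1)
        for i, child in enumerate(children):
            place(child, depth + 1, angle_lo + i * span, angle_lo + (i + 1) * span)

    place(next(iter(tree)), 0, 0.0, 2 * math.pi)
    return positions

# Invented example hierarchy of plant systems, purely for illustration.
demo = {"plant": ["unit 1", "unit 2"], "unit 1": ["pump A", "pump B"], "unit 2": ["valve C"]}
for node, (x, y) in focus_context_layout(demo).items():
    print(f"{node:8s} -> ({x:+.2f}, {y:+.2f})")
```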

  20. ASSESSMENT OF ATTENTION THRESHOLD IN RATS BY TITRATION OF VISUAL CUE DURATION DURING THE FIVE CHOICE SERIAL REACTION TIME TASK

    Science.gov (United States)

    Martin, Thomas J.; Grigg, Amanda; Kim, Susy A.; Ririe, Douglas G.; Eisenach, James C.

    2014-01-01

    Background The 5 choice serial reaction time task (5CSRTT) is commonly used to assess attention in rodents. We sought to develop a variant of the 5CSRTT that would speed training to objective success criteria, and to test whether this variant could determine attention capability in each subject. New Method Fisher 344 rats were trained to perform a variant of the 5CSRTT in which the duration of visual cue presentation (cue duration) was titrated between trials based upon performance. The cue duration was decreased when the subject made a correct response, or increased with incorrect responses or omissions. Additionally, test day challenges were provided consisting of lengthening the intertrial interval and inclusion of a visual distracting stimulus. Results Rats readily titrated the cue duration to less than 1 sec in 25 training sessions or less (mean ± SEM, 22.9 ± 0.7), and the median cue duration (MCD) was calculated as a measure of attention threshold. Increasing the intertrial interval increased premature responses, decreased the number of trials completed, and increased the MCD. Decreasing the intertrial interval and time allotted for consuming the food reward demonstrated that a minimum of 3.5 sec is required for rats to consume two food pellets and successfully attend to the next trial. Visual distraction in the form of a 3 Hz flashing light increased the MCD and both premature and time out responses. Comparison with existing method The titration variant of the 5CSRTT is a useful method that dynamically measures attention threshold across a wide range of subject performance, and significantly decreases the time required for training. Task challenges produce similar effects in the titration method as reported for the classical procedure. Conclusions The titration 5CSRTT method is an efficient training procedure for assessing attention and can be utilized to assess the limit in performance ability across subjects and various schedule manipulations. PMID
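    The titration rule described in this record is essentially a one-up/one-down staircase on cue duration, with the median titrated duration (MCD) taken as the attention threshold. The sketch below captures that logic only; the starting duration, step size, and bounds are assumptions, not values from the paper.

```python
import statistics

def titrate_cue_duration(trial_outcomes, start=5.0, step=0.2, floor=0.1, ceiling=10.0):
    """Adjust the visual cue duration trial by trial: shorten it after a correct response,
    lengthen it after an incorrect response or an omission. Returns the per-trial durations
    and the median cue duration (MCD) as the attention-threshold estimate."""
    duration, durations = start, []
    for outcome in trial_outcomes:          # each outcome: 'correct', 'incorrect', or 'omission'
        durations.append(duration)
        if outcome == 'correct':
            duration = max(floor, duration - step)
        else:
            duration = min(ceiling, duration + step)
    return durations, statistics.median(durations)

# Example session: mostly correct responses drive the cue duration downward.
outcomes = ['correct'] * 8 + ['omission'] + ['correct'] * 10 + ['incorrect'] + ['correct'] * 5
durations, mcd = titrate_cue_duration(outcomes)
print(f"final duration = {durations[-1]:.1f} s, MCD = {mcd:.1f} s")
```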

  1. Visual Acuity does not Moderate Effect Sizes of Higher-Level Cognitive Tasks

    Science.gov (United States)

    Houston, James R.; Bennett, Ilana J.; Allen, Philip A.; Madden, David J.

    2016-01-01

    Background Declining visual capacities in older adults have been posited as a driving force behind adult age differences in higher-order cognitive functions (e.g., the “common cause” hypothesis of Lindenberger & Baltes, 1994). McGowan, Patterson and Jordan (2013) also found that a surprisingly large number of published cognitive aging studies failed to include adequate measures of visual acuity. However, a recent meta-analysis of three studies (LaFleur & Salthouse, 2014) failed to find evidence that visual acuity moderated or mediated age differences in higher-level cognitive processes. In order to provide a more extensive test of whether visual acuity moderates age differences in higher-level cognitive processes, we conducted a more extensive meta-analysis of this topic. Methods Using results from 456 studies, we calculated effect sizes for the main effect of age across four cognitive domains (attention, executive function, memory, and perception/language) separately for five levels of visual acuity criteria (no criteria, undisclosed criteria, self-reported acuity, 20/80-20/31, and 20/30 or better). Results As expected, age had a significant effect on each cognitive domain. However, these age effects did not further differ as a function of visual acuity criteria. Conclusion The current meta-analytic, cross-sectional results suggest that visual acuity is not significantly related to age group differences in higher-level cognitive performance—thereby replicating LaFleur and Salthouse (2014). Further efforts are needed to determine whether other measures of visual functioning (e.g. contrast sensitivity, luminance) affect age differences in cognitive functioning. PMID:27070044

  2. Numerosity estimates for attended and unattended items in visual search.

    Science.gov (United States)

    Kelley, Troy D; Cassenti, Daniel N; Marusich, Laura R; Ghirardelli, Thomas G

    2017-07-01

    The goal of this research was to examine memories created for the number of items during a visual search task. Participants performed a visual search task for a target defined by a single feature (Experiment 1A), by a conjunction of features (Experiment 1B), or by a specific spatial configuration of features (Experiment 1C). On some trials following the search task, subjects were asked to recall the total number of items in the previous display. In all search types, participants underestimated the total number of items, but the severity of the underestimation varied depending on the efficiency of the search. In three follow-up studies (Experiments 2A, 2B, and 2C) using the same visual stimuli, the participants' only task was to estimate the number of items on each screen. Participants still underestimated the numerosity of the items, although the degree of underestimation was smaller than in the search tasks and did not depend on the type of visual stimuli. In Experiment 3, participants were asked to recall the number of items in a display only once. Subjects still displayed a tendency to underestimate, indicating that the underestimation effects seen in Experiments 1A-1C were not attributable to knowledge of the estimation task. The degree of underestimation depends on the efficiency of the search task, with more severe underestimation in efficient search tasks. This suggests that the lower attentional demands of very efficient searches leads to less encoding of numerosity of the distractor set.

  3. Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.

    Science.gov (United States)

    Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara

    2017-01-01

    Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.

  4. Enhancement and suppression in the visual field under perceptual load.

    Science.gov (United States)

    Parks, Nathan A; Beck, Diane M; Kramer, Arthur F

    2013-01-01

    The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2, 6, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6 or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
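    Tracking distractor processing "in the frequency domain" as described here amounts to reading out the EEG amplitude at the ring's 8.3 Hz contrast-reversal frequency. The snippet below is an illustrative single-epoch version of that readout on synthetic data; sampling rate, epoch length, and signal strength are assumed values.

```python
import numpy as np

def ssvep_amplitude(epoch, fs, target_hz=8.3):
    """Amplitude spectrum of a single-channel EEG epoch at the FFT bin nearest target_hz."""
    n = len(epoch)
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) / n   # Hann window reduces leakage
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

# Synthetic example: a weak 8.3 Hz signal buried in noise, 2 s at 500 Hz sampling.
fs = 500
t = np.arange(0, 2, 1 / fs)
epoch = 0.5 * np.sin(2 * np.pi * 8.3 * t) + np.random.randn(t.size)
print(f"8.3 Hz amplitude: {ssvep_amplitude(epoch, fs):.3f}")
```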

  5. Enhancement and Suppression in the Visual Field under Perceptual Load

    Directory of Open Access Journals (Sweden)

    Nathan A Parks

    2013-05-01

    Full Text Available The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task – greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2°, 6°, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6° or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.

  6. Visual impairments and their influence on road safety.

    NARCIS (Netherlands)

    2015-01-01

    Visual perception is an important source of information when driving a car. Visual impairments of drivers will therefore have an effect on performing the driving task. However, the effects on the crash rate are limited. The reason is, among other things, that people with visual impairments often

  7. Productivity associated with visual status of computer users.

    Science.gov (United States)

    Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W

    2004-01-01

    The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks calculated to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of productivity for time to completion varied from a minimum of 2.5% upwards to 28.7% with 2 D cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
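    The cost-benefit figure quoted in this record follows from simple arithmetic on the values given in the abstract: 2.5% of a $25,000 salary is $625 of recovered productivity per year against a $268 correction cost, a ratio of roughly 2.3. A quick check of that calculation:

```python
salary = 25_000          # annual salary in dollars (from the abstract)
gain = 0.025             # conservative 2.5% productivity improvement (from the abstract)
correction_cost = 268    # total cost of the visual correction in dollars (from the abstract)

benefit = salary * gain
print(f"benefit = ${benefit:.0f}, cost-benefit ratio = {benefit / correction_cost:.1f}")  # ~2.3
```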

  8. Nonuniform Changes in the Distribution of Visual Attention from Visual Complexity and Action: A Driving Simulation Study.

    Science.gov (United States)

    Park, George D; Reed, Catherine L

    2015-02-01

    Researchers acknowledge the interplay between action and attention, but typically consider action as a response to successful attentional selection or the correlation of performance on separate action and attention tasks. We investigated how concurrent action with spatial monitoring affects the distribution of attention across the visual field. We embedded a functional field of view (FFOV) paradigm with concurrent central object recognition and peripheral target localization tasks in a simulated driving environment. Peripheral targets varied across 20-60 deg eccentricity at 11 radial spokes. Three conditions assessed the effects of visual complexity and concurrent action on the size and shape of the FFOV: (1) with no background, (2) with driving background, and (3) with driving background and vehicle steering. The addition of visual complexity slowed task performance and reduced the FFOV size but did not change the baseline shape. In contrast, the addition of steering produced not only shrinkage of the FFOV, but also changes in the FFOV shape. Nonuniform performance decrements occurred in proximal regions used for the central task and for steering, independent of interference from context elements. Multifocal attention models should consider the role of action and account for nonhomogeneities in the distribution of attention. © 2015 SAGE Publications.

  9. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Full Text Available Abstract Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel
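    One simple way to realize continuous auditory feedback of the kind described here is to map a kinematic quantity onto a sound parameter such as pitch, updated every control cycle. The mapping below is a hypothetical illustration, not the apparatus used in the study; the value range and frequency bounds are assumptions.

```python
import numpy as np

def kinematic_to_pitch(value, value_range=0.5, f_low=220.0, f_high=880.0):
    """Map a kinematic signal (assumed |value| <= value_range) to a tone frequency in Hz.
    Feeding in the target's velocity gives 'task-related' feedback; feeding in the
    cursor-target error gives 'error-related' feedback."""
    x = np.clip(abs(value) / value_range, 0.0, 1.0)
    return f_low * (f_high / f_low) ** x      # exponential mapping sounds perceptually even

# A few sample control cycles of task-related feedback driven by target velocity (m/s).
for v in (0.0, 0.1, 0.25, 0.5):
    print(f"target velocity = {v:.2f} m/s -> {kinematic_to_pitch(v):6.1f} Hz")
```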

  10. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search.

    Science.gov (United States)

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-03-01

    Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.

  11. Visual Word Recognition Across the Adult Lifespan

    Science.gov (United States)

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  12. Judgments of auditory-visual affective congruence in adolescents with and without autism: a pilot study of a new task using fMRI.

    Science.gov (United States)

    Loveland, Katherine A; Steinberg, Joel L; Pearson, Deborah A; Mansour, Rosleen; Reddoch, Stacy

    2008-10-01

    One of the most widely reported developmental deficits associated with autism is difficulty perceiving and expressing emotion appropriately. Brain activation associated with performance on a new task, the Emotional Congruence Task, was examined; the task requires judging the affective congruence of facial expression and voice, as compared with judging their sex congruence. Participants in this pilot study were adolescents with normal IQ, with (n = 5) or without (n = 4) autism. In the emotional congruence condition, as compared to the sex congruence of voice and face, controls had significantly more activation than the Autism group in the orbitofrontal cortex, the superior temporal, parahippocampal, and posterior cingulate gyri, and occipital regions. Unlike controls, the Autism group did not have significantly greater prefrontal activation during the emotional congruence condition, but did during the sex congruence condition. Results indicate the Emotional Congruence Task can be used successfully to assess brain activation and behavior associated with integration of auditory and visual information for emotion. While the numbers in the groups are small, the results suggest that brain activity while performing the Emotional Congruence Task differed between adolescents with and without autism in fronto-limbic areas and in the superior temporal region. These findings must be confirmed using larger samples of participants.

  13. Haptic sensitivity in needle insertion: the effects of training and visual aid

    Directory of Open Access Journals (Sweden)

    Dumas Cedric

    2011-12-01

    Full Text Available This paper describes an experiment conducted to measure haptic sensitivity and the effects of haptic training with and without visual aid. The protocol for haptic training consisted of a needle insertion task using dual-layer silicon samples. A visual aid was provided as a multimodal cue for the haptic perception task. Results showed that for a group of novices (subjects with no previous experience in needle insertion), training with a visual aid resulted in a longer time to task completion and a greater applied force during post-training tests. This suggests that haptic perception is easily overshadowed, and may be completely replaced, by visual feedback. Therefore, haptic skills must be trained differently from visuomotor skills.

  14. The association of visual memory with hippocampal volume.

    Science.gov (United States)

    Zammit, Andrea R; Ezzati, Ali; Katz, Mindy J; Zimmerman, Molly E; Lipton, Michael L; Sliwinski, Martin J; Lipton, Richard B

    2017-01-01

    In this study we investigated the role of hippocampal volume (HV) in visual memory. Participants were a subsample of older adults (≥ 70 years) from the Einstein Aging Study. Visual performance was measured using the Complex Figure (CF) copy and delayed recall tasks from the Repeatable Battery for the Assessment of Neuropsychological Status. Linear regressions were fitted to study associations between HV and visual tasks. Participants' (n = 113, mean age = 78.9 years) average scores on the CF copy and delayed recall were 17.4 and 11.6, respectively. CF delayed recall was associated with total (β = 0.031, p = 0.001), left (β = 0.031, p = 0.001), and right HV (β = 0.24, p = 0.012). CF delayed recall remained significantly associated with left HV even after we also included right HV (β = 0.27, p = 0.025) and the CF copy task (β = 0.30, p = 0.009) in the model. CF copy did not show any significant associations with HV. Our results suggest that left HV contributes to the retrieval of visual memory in older adults.
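
    The sketch below illustrates the structure of the regression reported above (CF delayed recall regressed on left hippocampal volume, adjusting for right hippocampal volume and CF copy). It is not the authors' analysis; all data values are simulated, with only the sample size and mean copy score taken from the abstract.

```python
# Hypothetical sketch of the regression structure described above (not the
# authors' code or data): CF delayed recall regressed on left hippocampal
# volume, adjusting for right hippocampal volume and the CF copy score.
import numpy as np

rng = np.random.default_rng(0)
n = 113                                    # sample size reported in the abstract
left_hv = rng.normal(3.0, 0.4, n)          # illustrative volumes (cm^3), assumed
right_hv = rng.normal(3.1, 0.4, n)
cf_copy = rng.normal(17.4, 3.0, n)         # mean copy score from the abstract
cf_recall = 0.3 * left_hv + 0.1 * right_hv + 0.2 * cf_copy + rng.normal(0, 2, n)

# Design matrix: intercept + predictors (left HV is the term of interest).
X = np.column_stack([np.ones(n), left_hv, right_hv, cf_copy])
beta, *_ = np.linalg.lstsq(X, cf_recall, rcond=None)
print(dict(zip(["intercept", "left_hv", "right_hv", "cf_copy"], beta.round(3))))
```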

  15. The association of visual memory with hippocampal volume.

    Directory of Open Access Journals (Sweden)

    Andrea R Zammit

    Full Text Available In this study we investigated the role of hippocampal volume (HV) in visual memory. Participants were a subsample of older adults (≥ 70 years) from the Einstein Aging Study. Visual performance was measured using the Complex Figure (CF) copy and delayed recall tasks from the Repeatable Battery for the Assessment of Neuropsychological Status. Linear regressions were fitted to study associations between HV and visual tasks. Participants' (n = 113, mean age = 78.9 years) average scores on the CF copy and delayed recall were 17.4 and 11.6, respectively. CF delayed recall was associated with total (β = 0.031, p = 0.001), left (β = 0.031, p = 0.001), and right HV (β = 0.24, p = 0.012). CF delayed recall remained significantly associated with left HV even after we also included right HV (β = 0.27, p = 0.025) and the CF copy task (β = 0.30, p = 0.009) in the model. CF copy did not show any significant associations with HV. Our results suggest that left HV contributes to the retrieval of visual memory in older adults.

  16. Mirror Visual Feedback Training Improves Intermanual Transfer in a Sport-Specific Task: A Comparison between Different Skill Levels

    Directory of Open Access Journals (Sweden)

    Fabian Steinberg

    2016-01-01

    Full Text Available Mirror training therapy is a promising tool to initiate neural plasticity and facilitate the recovery of motor skills after diseases such as stroke or hemiparesis, by improving the intermanual transfer of fine motor skills in healthy people as well as in patients. This study evaluated whether the performance improvements augmented by mirror visual feedback (MVF) could be used for learning a sport-specific skill and whether the effects are modulated by skill level. A sample of 39 young, healthy, and experienced basketball and handball players and 41 novices performed a stationary basketball dribble task at a mirror box in a standing position and received either MVF or direct feedback. After four training days using only the right hand, performance of both hands improved from pre- to posttest measurements. Only for the left (untrained) hand of the experienced participants was the improvement under MVF more pronounced than that of the control group. This indicates that intermanual motor transfer can be improved by MVF in a sport-specific task. However, this effect cannot be generalized to motor learning per se, since it is modulated by the individuals' skill level, a factor that might be considered in mirror therapy research.

  17. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma

    Directory of Open Access Journals (Sweden)

    Enkelejda Kasneci

    2017-01-01

    Full Text Available To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.

  18. Brain activations during bimodal dual tasks depend on the nature and combination of component tasks

    Directory of Open Access Journals (Sweden)

    Emma Salo

    2015-02-01

    Full Text Available We used functional magnetic resonance imaging to investigate brain activations during nine different dual tasks in which the participants were required to simultaneously attend to concurrent streams of spoken syllables and written letters. They performed a phonological, spatial or simple (speaker-gender or font-shade) discrimination task within each modality. We expected to find activations associated specifically with dual tasking especially in the frontal and parietal cortices. However, no brain areas showed systematic dual task enhancements common to all dual tasks. Further analysis revealed that dual tasks including component tasks that were, according to Baddeley's model, modality atypical (that is, the auditory spatial task or the visual phonological task) were not associated with enhanced frontal activity. In contrast, for other dual tasks, activity specifically associated with dual tasking was found in the left or bilateral frontal cortices. Enhanced activation in parietal areas, however, appeared not to be specifically associated with dual tasking per se, but rather with intermodal attention switching. We also expected effects of dual tasking in left frontal supramodal phonological processing areas when both component tasks required phonological processing, and in right parietal supramodal spatial processing areas when both tasks required spatial processing. However, no such effects were found during these dual tasks compared with their component tasks performed separately. Taken together, the current results indicate that activations during dual tasks depend in a complex manner on the specific demands of the component tasks.

  19. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    Science.gov (United States)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye

  20. Visual features for perception, attention, and working memory: Toward a three-factor framework.

    Science.gov (United States)

    Huang, Liqiang

    2015-12-01

    Visual features are the general building blocks for attention, perception, and working memory. Here, I explore the factors which can quantitatively predict all the differences they make in various paradigms. I tried to combine the strengths of experimental and correlational approaches in a novel way by developing an individual-item differences analysis to extract the factors from 16 stimulus types on the basis of their roles in eight tasks. A large sample size (410) ensured that all eight tasks had a reliability (Cronbach's α) of no less than 0.975, allowing the factors to be precisely determined. Three orthogonal factors were identified which correspond respectively to featural strength (i.e., how close a stimulus is to a basic feature), visual strength (i.e., visual quality of the stimulus), and spatial strength (i.e., how well a stimulus can be represented as a spatial structure). Featural strength helped substantially in all the tasks but moderately less so in perceptual discrimination; visual strength helped substantially in low-level tasks but not in high-level tasks; and spatial strength helped change detection but hindered ensemble matching and visual search. Jointly, these three factors explained 96.4% of all the variances of the eight tasks, making it clear that they account for almost everything about the roles of these 16 stimulus types in these eight tasks. Copyright © 2015 Elsevier B.V. All rights reserved.
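
    As a minimal illustration of the reliability statistic cited above, the sketch below computes Cronbach's alpha for a simulated participants-by-items score matrix. It is not the author's analysis pipeline; the simulated data and the number of items are assumptions, with only the sample size taken from the abstract.

```python
# Minimal sketch of Cronbach's alpha, the reliability statistic reported above
# (>= 0.975 for each task). Rows are participants, columns are items of one
# task; the data here are simulated for illustration only.
import numpy as np

def cronbach_alpha(scores):
    """scores: participants x items matrix of task scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of participants' totals
    return (n_items / (n_items - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(410, 1))                 # sample size from the abstract
items = ability + 0.3 * rng.normal(size=(410, 16))  # 16 stimulus types, assumed
print(round(cronbach_alpha(items), 3))
```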

  1. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    Science.gov (United States)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences human trust in, and utilization of, automation.

  2. Long-term visual associations affect attentional guidance

    NARCIS (Netherlands)

    Olivers, C.N.L.

    2011-01-01

    When observers perform a visual search task, they are assumed to adopt an attentional set for what they are looking for. The present experiment investigates the influence of long-term visual memory associations on this attentional set. On each trial, observers were asked to search a display for a

  3. Visual Representation Determines Search Difficulty: Explaining Visual Search Asymmetries

    Directory of Open Access Journals (Sweden)

    Neil Bruce

    2011-07-01

    Full Text Available In visual search experiments there exist a variety of experimental paradigms in which a symmetric set of experimental conditions yields asymmetric corresponding task performance. There are a variety of examples of this that currently lack a satisfactory explanation. In this paper, we demonstrate that distinct classes of asymmetries may be explained by virtue of a few simple conditions that are consistent with current thinking surrounding computational modeling of visual search and coding in the primate brain. This includes a detailed look at the role that stimulus familiarity plays in the determination of search performance. Overall, we demonstrate that all of these asymmetries have a common origin, namely, they are a consequence of the encoding that appears in the visual cortex. The analysis associated with these cases yields insight into the problem of visual search in general and predictions of novel search asymmetries.

  4. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    Science.gov (United States)

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
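
    The sketch below illustrates the frequency-tagging logic described above: signals modulated at the three tag frequencies are mixed into one recording and then read out separately from the amplitude spectrum. It is illustrative only, not the authors' analysis; the sampling rate, duration, and noise level are assumptions.

```python
# Illustrative sketch (not the authors' pipeline) of frequency tagging: three
# sources modulated at 2.5, 8.5 and 40 Hz are mixed into one signal and then
# recovered separately from the amplitude spectrum.
import numpy as np

fs, dur = 500.0, 10.0                          # sampling rate and duration, assumed
t = np.arange(0, dur, 1 / fs)
tags = {"RSVP stream": 2.5, "visual distractor": 8.5, "auditory distractor": 40.0}
signal = sum(np.sin(2 * np.pi * f * t) for f in tags.values())
signal = signal + np.random.default_rng(2).normal(0, 0.5, t.size)   # measurement noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, f in tags.items():
    amp = spectrum[np.argmin(np.abs(freqs - f))]                    # bin nearest the tag
    print(f"{name:20s} {f:5.1f} Hz  amplitude ~ {amp:.3f}")
```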

  5. Competition Between Endogenous and Exogenous Orienting of Visual Attention

    Science.gov (United States)

    Berger, Andrea; Henik, Avishai; Rafal, Robert

    2005-01-01

    The relation between reflexive and voluntary orienting of visual attention was investigated with 4 experiments: a simple detection task, a localization task, a saccade toward the target task, and a target identification task in which discrimination difficulty was manipulated. Endogenous and exogenous orienting cues were presented in each trial and…

  6. Discourse with Visual Health Data: Design of Human-Data Interaction

    Directory of Open Access Journals (Sweden)

    Oluwakemi Ola

    2018-03-01

    Full Text Available Previous work has suggested that large repositories of data can revolutionize healthcare activities; however, there remains a disconnection between data collection and its effective usage. The way in which users interact with data strongly impacts their ability to not only complete tasks but also capitalize on the purported benefits of such data. Interactive visualizations can provide a means by which many data-driven tasks can be performed. Recent surveys, however, suggest that many visualizations mostly enable users to perform simple manipulations, thus limiting their ability to complete tasks. Researchers have called for tools that allow for richer discourse with data. Nonetheless, systematic design of human-data interaction for visualization tools is a non-trivial task. It requires taking into consideration a myriad of issues. Creation of visualization tools that incorporate rich human-data discourse would benefit from the use of design frameworks. In this paper, we examine and present a design process that is based on a conceptual human-data interaction framework. We discuss and describe the design of interaction for a visualization tool intended for sensemaking of public health data. We demonstrate the utility of systematic interaction design in two ways. First, we use scenarios to highlight how our design approach supports a rich and meaningful discourse with data. Second, we present results from a study that details how users were able to perform various tasks with health data and learn about global health trends.

  7. Selective visual attention to emotional words: Early parallel frontal and visual activations followed by interactive effects in visual cortex.

    Science.gov (United States)

    Schindler, Sebastian; Kissler, Johanna

    2016-10-01

    Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.

  8. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Science.gov (United States)

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  9. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Directory of Open Access Journals (Sweden)

    Gary Lupyan

    Full Text Available BACKGROUND: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. METHODOLOGY/PRINCIPAL FINDINGS: Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. CONCLUSIONS/SIGNIFICANCE: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  10. Driver's glance behaviour and secondary tasks; Einfluss von Nebenaufgaben auf das Fahrerblickverhalten

    Energy Technology Data Exchange (ETDEWEB)

    Schweigert, M. [BMW Group Forschung und Technik, Muenchen (Germany)]; Bubb, H. [TU Muenchen, Garching (Germany). Lehrstuhl fuer Ergonomie]

    2003-07-01

    This paper contains a proposal for the evaluation of drivers' glance behavior, focussing on the influence of secondary tasks during driving. In general, an evaluation can only be achieved by regarding the quality of task completion, which can be calculated by comparing a measured, actual value or behavior with a defined target value or behavior. Following this definition, a target glance behavior is defined by so-called continuous and situational visual tasks. As opposed to continuous visual tasks, situational visual tasks contain a concrete description of a target glance behavior. A field trial (N=30) showed that the subjects' glance behavior fulfilled most of the defined visual tasks when driving without a secondary task. Driving with secondary tasks leads to increasing reliance of the subjects on the correct driving of the other road users, shown by decreasing visual monitoring. (orig.) [Translated from German] The present study addresses the evaluation of drivers' glance behavior, with the main focus on the influence of secondary tasks that have to be performed while driving. An evaluation is always closely tied to the concept of quality, whereby an actual value is compared with a specified target value. Only when the deviation between target and actual value is small is the quality high and the evaluation therefore positive. In defining a target glance behavior, a distinction is made here between continuous and situational visual tasks. The latter contain concrete requirements for the driver's glance behavior in specific situations, whereas the specification of a target value for continuous tasks largely defies exact quantification. In the field trial (N=30) it could be shown that during drives without secondary tasks (reference condition) the defined visual tasks were largely fulfilled. If, however, the driver is occupied by secondary tasks, he

  11. Shape representation modulating the effect of motion on visual search performance.

    Science.gov (United States)

    Yang, Lindong; Yu, Ruifeng; Lin, Xuelian; Liu, Na

    2017-11-02

    The effect of motion on visual search has been extensively investigated, but the effect of uniform linear motion of the display on search performance for tasks with different target-distractor shape representations has rarely been explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants finished two search tasks that differed in target-distractor shape representations under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, which was consistent with previous studies. However, for tasks with high target-distractor shape similarity, if the target differed from distractors in that a gap with a linear contour was added to the target, and the corresponding part of distractors had a curved contour, motion positively influenced search performance. Motion blur contributed to the performance enhancement under dynamic conditions. The findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when the display has uniform linear motion.

  12. Action recognition and movement direction discrimination tasks are associated with different adaptation patterns

    Directory of Open Access Journals (Sweden)

    Stephan De La Rosa

    2016-02-01

    Full Text Available The ability to discriminate between different actions is essential for action recognition and social interaction. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g. left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g. when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward- and backward-moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms.

  13. Reward associations impact both iconic and visual working memory.

    Science.gov (United States)

    Infanti, Elisa; Hickey, Clayton; Turatto, Massimo

    2015-02-01

    Reward plays a fundamental role in human behavior. A growing number of studies have shown that stimuli associated with reward become salient and attract attention. The aim of the present study was to extend these results to the investigation of iconic memory and visual working memory. In two experiments we asked participants to perform a visual-search task in which different colors of the target stimuli were paired with high or low reward. We then tested whether the pre-established feature-reward association affected performance on a subsequent visual memory task, in which no reward was provided. In this test phase participants viewed arrays of 8 objects, one of which had a unique color that could match the color associated with reward during the previous visual-search task. A probe appeared at varying intervals after stimulus offset to identify the to-be-reported item. Our results suggest that reward biases the encoding of visual information such that items characterized by a reward-associated feature interfere with mnemonic representations of other items in the test display. These results extend current knowledge regarding the influence of reward on early cognitive processes, suggesting that feature-reward associations automatically interact with the encoding and storage of visual information, both in iconic memory and in visual working memory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The phonological and visual basis of developmental dyslexia in Brazilian Portuguese reading children

    Science.gov (United States)

    Germano, Giseli D.; Reilhac, Caroline; Capellini, Simone A.; Valdois, Sylviane

    2014-01-01

    Evidence from opaque languages suggests that visual attention processing abilities, in addition to phonological skills, may act as cognitive underpinnings of developmental dyslexia. We explored the role of these two cognitive abilities on reading fluency in Brazilian Portuguese, a more transparent orthography than French or English. Sixty-six children with developmental dyslexia and normally reading Brazilian Portuguese children participated. They were administered three tasks of phonological skills (phoneme identification, phoneme blending, and syllable blending) and three visual tasks (a letter global report task and two non-verbal tasks of visual closure and visual constancy). Results show that Brazilian Portuguese children with developmental dyslexia are impaired not only in phonological processing but also in visual processing. The phonological and visual processing abilities significantly and independently contribute to reading fluency in the whole population. Last, different cognitively homogeneous subtypes can be identified in the Brazilian Portuguese population of children with developmental dyslexia. Two subsets of children with developmental dyslexia were identified as having a single cognitive disorder, phonological or visual; another group exhibited a double deficit, and a few children showed no visual or phonological disorder. Thus the current findings extend previous data from more opaque orthographies such as French and English, showing the importance of investigating visual processing skills in addition to phonological skills in children with developmental dyslexia, whatever the orthographic transparency of their language. PMID:25352822

  15. Social Set Visualizer (SoSeVi) II

    DEFF Research Database (Denmark)

    Flesch, Benjamin; Vatrapu, Ravi

    2016-01-01

    This paper reports the second iteration of the Social Set Visualizer (SoSeVi), a set theoretical visual analytics dashboard of big social data. In order to further demonstrate its usefulness in large-scale visual analytics tasks of individual and collective behavior of actors in social networks......, the current iteration of the Social Set Visualizer (SoSeVi) in version II builds on recent advancements in visualizing set intersections. The development of the SoSeVi dashboard involved cutting-edge open source visual analytics libraries (D3.js) and creation of new visualizations such as of actor mobility...

  16. Task-selective memory effects for successfully implemented encoding strategies.

    Science.gov (United States)

    Leshikar, Eric D; Duarte, Audrey; Hertzog, Christopher

    2012-01-01

    Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies--visual imagery and sentence generation--facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study.

  17. The Effects of 10 Hz Transcranial Alternating Current Stimulation on Audiovisual Task Switching

    Directory of Open Access Journals (Sweden)

    Michael S. Clayton

    2018-02-01

    Full Text Available Neural oscillations in the alpha band (7–13 Hz) are commonly associated with disengagement of visual attention. However, recent studies have also associated alpha with processes of attentional control and stability. We addressed this issue in previous experiments by delivering transcranial alternating current stimulation at 10 Hz over posterior cortex during visual tasks (alpha tACS). As this stimulation can induce reliable increases in EEG alpha power, and given that performance on each of our visual tasks was negatively associated with alpha power, we assumed that alpha tACS would reliably impair visual performance. However, alpha tACS was instead found to prevent both deteriorations and improvements in visual performance that otherwise occurred during sham and 50 Hz tACS. Alpha tACS therefore appeared to exert a stabilizing effect on visual attention. This hypothesis was tested in the current, pre-registered experiment by delivering alpha tACS during a task that required rapid switching of attention between motion, color, and auditory subtasks. We assumed that, if alpha tACS stabilizes visual attention, this stimulation should make it harder for people to switch between visual tasks, but should have little influence on transitions between auditory and visual subtasks. However, in contrast to this prediction, we observed no evidence of impairments in visuovisual vs. audiovisual switching during alpha vs. control tACS. Instead, we observed a trend-level reduction in visuoauditory switching accuracy during alpha tACS. Post-hoc analyses showed no effects of alpha tACS on response time variability, diffusion model parameters, or performance of repeat trials. EEG analyses also showed no effects of alpha tACS on endogenous or stimulus-evoked alpha power. We discuss possible explanations for these results, as well as their broader implications for current efforts to study the roles of neural oscillations in cognition using tACS.

  18. Cognitive Task Analysis of the Battalion Level Visualization Process

    National Research Council Canada - National Science Library

    Leedom, Dennis K; McElroy, William; Shadrick, Scott B; Lickteig, Carl; Pokorny, Robet A; Haynes, Jacqueline A; Bell, James

    2007-01-01

    ... position or as a battalion Operations Officer or Executive Officer. Based on findings from the cognitive task analysis, 11 skill areas were identified as potential focal points for future training development...

  19. The comparison of visual working memory representations with perceptual inputs.

    Science.gov (United States)

    Hyun, Joo-seok; Woodman, Geoffrey F; Vogel, Edward K; Hollingworth, Andrew; Luck, Steven J

    2009-08-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments in which manual reaction times, saccadic reaction times, and event-related potential latencies were examined. However, these experiments also showed that a slow, limited-capacity process must occur before the observer can make a manual change detection response.

  20. Object versus spatial visual mental imagery in patients with schizophrenia

    Science.gov (United States)

    Aleman, André; de Haan, Edward H.F.; Kahn, René S.

    2005-01-01

    Objective Recent research has revealed a larger impairment of object perceptual discrimination than of spatial perceptual discrimination in patients with schizophrenia. It has been suggested that mental imagery may share processing systems with perception. We investigated whether patients with schizophrenia would show greater impairment regarding object imagery than spatial imagery. Methods Forty-four patients with schizophrenia and 20 healthy control subjects were tested on a task of object visual mental imagery and on a task of spatial visual mental imagery. Both tasks included a condition in which no imagery was needed for adequate performance, but which was in other respects identical to the imagery condition. This allowed us to adjust for nonspecific differences in individual performance. Results The results revealed a significant difference between patients and controls on the object imagery task (F(1,63) = 11.8, p = 0.001) but not on the spatial imagery task (F(1,63) = 0.14, p = 0.71). To test for a differential effect, we conducted a 2 (patients v. controls) × 2 (object task v. spatial task) analysis of variance. The interaction term was statistically significant (F(1,62) = 5.2, p = 0.026). Conclusions Our findings suggest a differential dysfunction of systems mediating object and spatial visual mental imagery in schizophrenia. PMID:15644999

  1. Cognitive pitfall! Videogame players are not immune to dual-task costs.

    Science.gov (United States)

    Donohue, Sarah E; James, Brittany; Eslick, Andrea N; Mitroff, Stephen R

    2012-07-01

    With modern technological advances, we often find ourselves dividing our attention between multiple tasks. While this may seem a productive way to live, our attentional capacity is limited, and this yields costs in one or more of the many tasks that we try to do. Some people believe that they are immune to the costs of multitasking and commonly engage in potentially dangerous behavior, such as driving while talking on the phone. But are some groups of individuals indeed immune to dual-task costs? This study examines whether avid action videogame players, who have been shown to have heightened attentional capacities, are particularly adept multitaskers. Participants completed three visually demanding experimental paradigms (a driving videogame, a multiple-object-tracking task, and a visual search), with and without answering unrelated questions via a speakerphone (i.e., with and without a dual-task component). All of the participants, videogame players and nonvideogame players alike, performed worse while engaging in the additional dual task for all three paradigms. This suggests that extensive videogame experience may not offer immunity from dual-task costs.

  2. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    Science.gov (United States)

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. How Chinese Semantics Capability Improves Interpretation in Visual Communication

    Science.gov (United States)

    Cheng, Chu-Yu; Ou, Yang-Kun; Kin, Ching-Lung

    2017-01-01

    A visual representation involves delivering messages through visually communicated images. The study assumed that semantic recognition can affect visual interpretation ability, and the results showed that students graduating from a general high school achieved more satisfactory results in semantic recognition and image interpretation tasks than students…

  4. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp

    2011-06-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).

  5. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Gröller, Eduard M.

    2011-01-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).
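
    A minimal sketch of the per-pixel orientation tensor described in the abstract above follows: each rasterized line segment adds the outer product of its unit direction to the 2x2 tensor of every pixel it covers. The anisotropic diffusion of the noise texture and the multi-resolution representation are omitted, and the grid size and sampling step are assumptions rather than details from the paper.

```python
# Minimal sketch of the per-pixel 2x2 orientation tensor described above
# (diffusion of the noise texture and rendering are omitted). Grid size and
# the sampling step along each segment are assumptions.
import numpy as np

H = W = 256
tensor = np.zeros((H, W, 2, 2))

def accumulate_segment(p0, p1, step=0.5):
    """Add the orientation of segment p0 -> p1 to the tensors of covered pixels."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    length = np.hypot(d[0], d[1])
    if length == 0:
        return
    u = d / length                             # unit direction of the line
    outer = np.outer(u, u)                     # 2x2 orientation contribution
    for s in np.arange(0.0, length, step):     # dense sampling along the segment
        x, y = p0 + u * s
        i, j = int(round(y)), int(round(x))
        if 0 <= i < H and 0 <= j < W:
            tensor[i, j] += outer

accumulate_segment((20, 20), (236, 236))       # diagonal line
accumulate_segment((20, 236), (236, 20))       # anti-diagonal line
# Dominant local line orientation = eigenvector of the largest eigenvalue.
evals, evecs = np.linalg.eigh(tensor[64, 64])
print("dominant orientation at pixel (64, 64):", evecs[:, np.argmax(evals)])
```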

  6. Temporal discrimination thresholds in adult-onset primary torsion dystonia: an analysis by task type and by dystonia phenotype.

    LENUS (Irish Health Repository)

    Bradley, D

    2012-01-01

    Adult-onset primary torsion dystonia (AOPTD) is an autosomal dominant disorder with markedly reduced penetrance. Sensory abnormalities are present in AOPTD and also in unaffected relatives, possibly indicating non-manifesting gene carriage (acting as an endophenotype). The temporal discrimination threshold (TDT) is the shortest time interval at which two stimuli are detected to be asynchronous. We aimed to compare the sensitivity and specificity of three different TDT tasks (visual, tactile and mixed/visual-tactile). We also aimed to examine the sensitivity of TDTs in different AOPTD phenotypes. To examine tasks, we tested TDT in 41 patients and 51 controls using visual (2 lights), tactile (non-painful electrical stimulation) and mixed (1 light, 1 electrical) stimuli. To investigate phenotypes, we examined 71 AOPTD patients (37 cervical dystonia, 14 writer's cramp, 9 blepharospasm, 11 spasmodic dysphonia) and 8 musician's dystonia patients. The upper limit of normal was defined as control mean + 2.5 SD. In dystonia patients, the visual task detected abnormalities in 35/41 (85%), the tactile task in 35/41 (85%) and the mixed task in 26/41 (63%); the mixed task was less sensitive than the other two (p = 0.04). Specificity was 100% for the visual and tactile tasks. Abnormal TDTs were found in 36 of 37 (97.3%) cervical dystonia, 12 of 14 (85.7%) writer's cramp, 8 of 9 (88.8%) blepharospasm, 10 of 11 (90.1%) spasmodic dysphonia patients and 5 of 8 (62.5%) musicians. The visual and tactile tasks were found to be more sensitive than the mixed task. Temporal discrimination threshold results were comparable across common adult-onset primary torsion dystonia phenotypes, with lower sensitivity in the musicians.
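
    A hypothetical sketch of the cutoff rule quoted above (upper limit of normal = control mean + 2.5 SD) and of how sensitivity and specificity follow from it is given below; the TDT values are simulated, not study data, and the group sizes are the only figures taken from the abstract.

```python
# Hypothetical sketch of the cutoff rule described above: the upper limit of
# normal is the control mean + 2.5 SD, and a TDT above it counts as abnormal.
# All values are simulated for illustration; only group sizes follow the abstract.
import numpy as np

rng = np.random.default_rng(3)
controls = rng.normal(30.0, 8.0, 51)            # control TDTs in ms, assumed
patients = rng.normal(60.0, 15.0, 41)           # patient TDTs in ms, assumed

upper_limit = controls.mean() + 2.5 * controls.std(ddof=1)
sensitivity = np.mean(patients > upper_limit)   # abnormal patients / all patients
specificity = np.mean(controls <= upper_limit)  # normal controls / all controls
print(f"cutoff {upper_limit:.1f} ms, sensitivity {sensitivity:.2f}, "
      f"specificity {specificity:.2f}")
```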

  7. Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search.

    Science.gov (United States)

    Wahn, Basil; Schwandt, Jessika; Krüger, Matti; Crafa, Daina; Nunnendorf, Vanessa; König, Peter

    2016-06-01

    In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile and an auditory display, respectively, to provide search partners with information about each other's gaze. Results showed that search partners performed faster when the gaze information was received via a tactile or auditory display in comparison to receiving it via a visual display or receiving no gaze information. Findings demonstrate the effectiveness of tactile and auditory displays for receiving task-relevant information in joint tasks and are applicable to circumstances in which little or no visual information is available or the visual modality is already taxed with a demanding task such as air-traffic control. Practitioner Summary: The present study demonstrates that tactile and auditory displays are effective for receiving information about actions of others in joint tasks. Findings are either applicable to circumstances in which little or no visual information is available or when the visual modality is already taxed with a demanding task.

  8. Flow, affect and visual creativity.

    Science.gov (United States)

    Cseh, Genevieve M; Phillips, Louise H; Pearson, David G

    2015-01-01

    Flow (being in the zone) is purported to have positive consequences in terms of affect and performance; however, there is no empirical evidence about these links in visual creativity. Positive affect often--but inconsistently--facilitates creativity, and both may be linked to experiencing flow. This study aimed to determine relationships between these variables within visual creativity. Participants performed the creative mental synthesis task to simulate the creative process. Affect change (pre- vs. post-task) and flow were measured via questionnaires. The creativity of synthesis drawings was rated objectively and subjectively by judges. Findings empirically demonstrate that flow is related to affect improvement during visual creativity. Affect change was linked to productivity and self-rated creativity, but no other objective or subjective performance measures. Flow was unrelated to all external performance measures but was highly correlated with self-rated creativity; flow may therefore motivate perseverance towards eventual excellence rather than provide direct cognitive enhancement.

  9. Modality-specific effects on crosstalk in task switching: evidence from modality compatibility using bimodal stimulation.

    Science.gov (United States)

    Stephan, Denise Nadine; Koch, Iring

    2016-11-01

    The present study was aimed at examining modality-specific influences in task switching. To this end, participants switched either between modality compatible tasks (auditory-vocal and visual-manual) or incompatible spatial discrimination tasks (auditory-manual and visual-vocal). In addition, auditory and visual stimuli were presented simultaneously (i.e., bimodally) in each trial, so that selective attention was required to process the task-relevant stimulus. The inclusion of bimodal stimuli enabled us to assess congruence effects as a converging measure of increased between-task interference. The tasks followed a pre-instructed sequence of double alternations (AABB), so that no explicit task cues were required. The results show that switching between two modality incompatible tasks increases both switch costs and congruence effects compared to switching between two modality compatible tasks. The finding of increased congruence effects in modality incompatible tasks supports our explanation in terms of ideomotor "backward" linkages between anticipated response effects and the stimuli that called for this response in the first place. According to this generalized ideomotor idea, the modality match between response effects and stimuli would prime selection of a response in the compatible modality. This priming would cause increased difficulties to ignore the competing stimulus and hence increases the congruence effect. Moreover, performance would be hindered when switching between modality incompatible tasks and facilitated when switching between modality compatible tasks.

  10. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    Science.gov (United States)

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. Copyright © 2015 the authors 0270-6474/15/359786-13$15.00/0.
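    The record above names multivariate pattern analysis but not a specific classifier or validation scheme; the sketch below is a generic, hypothetical MVPA pipeline (a cross-validated linear classifier on simulated ROI patterns) and is not the authors' code.

    ```python
    # Generic MVPA sketch on simulated data; voxel and trial counts are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200
    labels = np.repeat([0, 1], n_trials // 2)              # 0 = novel, 1 = trained stimulus
    patterns = rng.normal(size=(n_trials, n_voxels))       # simulated trial-wise ROI patterns
    patterns[labels == 1, :20] += 0.4                      # weak multivoxel signal for "trained"

    clf = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
    print(f"cross-validated decoding accuracy: {accuracy:.2f}")
    ```

    In the study, per-subject decoding accuracies of this kind were then correlated with behavioral performance; the same classification logic can also be applied to resting-state patterns, as the abstract describes.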

  11. Empiric determination of corrected visual acuity standards for train crews.

    Science.gov (United States)

    Schwartz, Steven H; Swanson, William H

    2005-08-01

    Probably the most common visual standard for employment in the transportation industry is best-corrected, high-contrast visual acuity. Because such standards were often established absent empiric linkage to job performance, it is possible that a job applicant or employee who has visual acuity less than the standard may be able to satisfactorily perform the required job activities. For the transportation system that we examined, the train crew is required to inspect visually the length of the train before and during the time it leaves the station. The purpose of the inspection is to determine if an individual is in a hazardous position with respect to the train. In this article, we determine the extent to which high-contrast visual acuity can predict performance on a simulated task. Performance at discriminating hazardous from safe conditions, as depicted in projected photographic slides, was determined as a function of visual acuity. For different levels of visual acuity, which was varied through the use of optical defocus, a subject was required to label scenes as hazardous or safe. Task performance was highly correlated with visual acuity as measured under conditions normally used for vision screenings (high-illumination and high-contrast): as the acuity decreases, performance at discriminating hazardous from safe scenes worsens. This empirically based methodology can be used to establish a corrected high-contrast visual acuity standard for safety-sensitive work in transportation that is linked to the performance of a job-critical task.

  12. Visual short-term memory load reduces retinotopic cortex response to contrast.

    Science.gov (United States)

    Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli

    2012-11-01

    Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.

  13. Visual Network Asymmetry and Default Mode Network Function in ADHD: An fMRI Study

    Directory of Open Access Journals (Sweden)

    T. Sigi Hale

    2014-07-01

    Full Text Available Background: A growing body of research has identified abnormal visual information processing in ADHD. In particular, slow processing speed and increased reliance on visuo-perceptual strategies have become evident. Objective: The current study used recently developed fMRI methods to replicate and further examine abnormal rightward biased visual information processing in ADHD and to further characterize the nature of this effect; we tested its association to several large-scale distributed network systems. Method: We examined fMRI BOLD response during letter and location judgment tasks, and directly assessed visual network asymmetry and its association to large-scale networks using both a voxelwise and an averaged signal approach. Results: Initial within-group analyses revealed a pattern of left lateralized visual cortical activity in controls but right lateralized visual cortical activity in ADHD children. Direct analyses of visual network asymmetry confirmed atypical rightward bias in ADHD children compared to controls. This ADHD characteristic was atypically associated with reduced activation across several extra-visual networks, including the default mode network (DMN. We also found atypical associations between DMN activation and ADHD subjects’ inattentive symptoms and task performance. Conclusion: The current study demonstrated rightward VNA in ADHD during a simple letter discrimination task. This result adds an important novel consideration to the growing literature identifying abnormal visual processing in ADHD. We postulate that this characteristic reflects greater perceptual engagement of task-extraneous content, and that it may be a basic feature of less efficient top-down task-directed control over visual processing. We additionally argue that abnormal DMN function may contribute to this characteristic.
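    The record does not state how visual network asymmetry was quantified; one common laterality-index formulation, assumed here purely for illustration, is sketched below (variable names and values are hypothetical).

    ```python
    # Assumed laterality-index formulation; not taken from the study itself.
    import numpy as np

    def asymmetry_index(left, right):
        """(right - left) / (right + left); positive values indicate a rightward bias."""
        left, right = np.asarray(left, dtype=float), np.asarray(right, dtype=float)
        return (right - left) / (right + left)

    # Hypothetical per-subject mean visual-network activations (left, right hemisphere)
    left_betas = np.array([1.2, 0.9, 1.1])
    right_betas = np.array([1.0, 1.3, 1.4])
    print(asymmetry_index(left_betas, right_betas))
    ```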

  14. TME (Task Mapping Editor): tool for executing distributed parallel computing. TME user's manual

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Yamagishi, Nobuhiro; Imamura, Toshiyuki

    2000-03-01

    At the Center for Promotion of Computational Science and Engineering, the software environment PPExe has been developed to support scientific computing on a parallel computer cluster (distributed parallel scientific computing). TME (Task Mapping Editor) is one of the components of PPExe and provides a visual programming environment for distributed parallel scientific computing. Users can specify data dependences among tasks (programs) visually as a data-flow diagram and map these tasks onto computers interactively through the GUI of TME. The specified tasks are processed by other components of PPExe such as the Meta-scheduler, RIM (Resource Information Monitor), and EMS (Execution Management System) according to the execution order determined by TME. In this report, we describe the usage of TME. (author)
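    TME itself is a GUI tool, but the underlying idea, a task data-flow graph plus a task-to-computer mapping from which an execution order is derived, can be sketched in a few lines; the task and host names below are hypothetical and not taken from the report.

    ```python
    # Conceptual sketch of a task data-flow graph with a task-to-host mapping.
    from graphlib import TopologicalSorter   # standard library, Python 3.9+

    # Data dependences among tasks: task -> set of tasks it depends on
    dependences = {
        "preprocess": set(),
        "simulate":   {"preprocess"},
        "analyze":    {"simulate"},
        "visualize":  {"simulate", "analyze"},
    }

    # Interactive mapping of tasks onto computers, here fixed for illustration
    mapping = {"preprocess": "hostA", "simulate": "hostB",
               "analyze": "hostA", "visualize": "hostC"}

    # Execution order respecting the data-flow diagram
    for task in TopologicalSorter(dependences).static_order():
        print(f"run {task!r} on {mapping[task]}")
    ```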

  15. Self-Taught Visually-Guided Pointing for a Humanoid Robot

    National Research Council Canada - National Science Library

    Marjanovic, Matthew; Scassellati, Brian; Williamson, Matthew

    2006-01-01

    .... This task requires systems for learning saccade to visual targets, generating smooth arm trajectories, locating the arm in the visual field, and learning the map between gaze direction and correct...

  16. Diversification of visual media retrieval results using saliency detection

    Science.gov (United States)

    Muratov, Oleg; Boato, Giulia; De Natale, Francesco G. B.

    2013-03-01

    Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, so the use of visual features is inevitable. Visual saliency is information about the main object of an image that humans implicitly include while creating visual content. For this reason it is natural to exploit this information for the task of diversification. In this work we study whether visual saliency can be used for diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of the retrieved results.
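    The abstract does not spell out the re-ranking algorithm; as a hedged illustration, the sketch below applies a generic greedy, maximal-marginal-relevance-style diversification in which the item descriptors are assumed to be saliency-weighted visual features. It is not the paper's method.

    ```python
    # Generic diversification re-ranking sketch; descriptors and scores are made up.
    import numpy as np

    def diversify(relevance, features, k, trade_off=0.7):
        """Greedily pick k items, trading relevance against similarity to items already picked."""
        features = features / np.linalg.norm(features, axis=1, keepdims=True)
        selected, remaining = [], list(range(len(relevance)))
        while remaining and len(selected) < k:
            def score(i):
                if not selected:
                    return relevance[i]
                max_sim = max(float(features[i] @ features[j]) for j in selected)
                return trade_off * relevance[i] - (1.0 - trade_off) * max_sim
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected

    # Hypothetical saliency-weighted descriptors and retrieval scores for five images
    rng = np.random.default_rng(1)
    descriptors = rng.random((5, 16))
    scores = np.array([0.90, 0.88, 0.85, 0.60, 0.55])
    print(diversify(scores, descriptors, k=3))
    ```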

  17. Visual word recognition across the adult lifespan.

    Science.gov (United States)

    Cohen-Shikora, Emily R; Balota, David A

    2016-08-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult life span and across a large set of stimuli (N = 1,187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgment). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the word recognition system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly because of sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using 3 different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Correction of refractive errors in rhesus macaques (Macaca mulatta) involved in visual research.

    Science.gov (United States)

    Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias

    2014-08-01

    Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals.

  19. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    Science.gov (United States)

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

    There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correctly identifying second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target, that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in AB in LVGPs compared with NVGPs.

  20. Iowa gambling task: Administration effects in older adults

    Directory of Open Access Journals (Sweden)

    Daniela Di Giorgio Schneider

    Full Text Available Abstract The Iowa Gambling Task (IGT) assesses decision-making. Objective: The objective of the present study was to investigate whether specific changes in administering the IGT can affect performance of older adults completing the task. Method: Three versions of the IGT were compared regarding the feedback on the amount of money won or lost over the course of the test. The first version (I) consisted of a replication of the original version (Bechara et al., 1994), which utilizes a computerized visual aid (a green bar that increases or decreases according to the gains or the losses). The second version (II), however, involved a non-computerized visual aid (cards) and, in the third version (III), the task did not include any visual aid at all. Ninety-seven older adults, divided into three groups, participated in this study. Group I received computerized cues (n=40), group II non-computerized cues (n=17), and group III was submitted to a version without any cues (n=40). Results: The participants without any cues achieved only a borderline performance, whereas for those with non-computerized cues, twice the number of participants showed attraction to risk in relation to those with aversion. The participants of the computerized version were homogeneously spread across the three performance levels (impaired, borderline and unimpaired). Conclusions: Aspects of the complexity of the decision process as well as of the task used are proposed as possible theoretical explanations for the performance variation exhibited.

  1. Examining a supramodal network for conflict processing: a systematic review and novel functional magnetic resonance imaging data for related visual and auditory stroop tasks.

    Science.gov (United States)

    Roberts, Katherine L; Hall, Deborah A

    2008-06-01

    Cognitive control over conflicting information has been studied extensively using tasks such as the color-word Stroop, flanker, and spatial conflict task. Neuroimaging studies typically identify a fronto-parietal network engaged in conflict processing, but numerous additional regions are also reported. Ascribing putative functional roles to these regions is problematic because some may have less to do with conflict processing per se, but could be engaged in specific processes related to the chosen stimulus modality, stimulus feature, or type of conflict task. In addition, some studies contrast activation on incongruent and congruent trials, even though a neutral baseline is needed to separate the effect of inhibition from that of facilitation. In the first part of this article, we report a systematic review of 34 neuroimaging publications, which reveals that conflict-related activity is reliably reported in the anterior cingulate cortex and bilaterally in the lateral prefrontal cortex, the anterior insula, and the parietal lobe. In the second part, we further explore these candidate "conflict" regions through a novel functional magnetic resonance imaging experiment, in which the same group of subjects perform related visual and auditory Stroop tasks. By carefully controlling for the same task (Stroop), the same to-be-ignored stimulus dimension (word meaning), and by separating out inhibitory processes from those of facilitation, we attempt to minimize the potential differences between the two tasks. The results provide converging evidence that the regions identified by the systematic review are reliably engaged in conflict processing. Despite carefully matching the Stroop tasks, some regions of differential activity remained, particularly in the parietal cortex. We discuss some of the task-specific processes which might account for this finding.

  2. Adaptive semantics visualization

    CERN Document Server

    Nazemi, Kawa

    2016-01-01

    This book introduces a novel approach for intelligent visualizations that adapts the visual variables and data processing to human behavior and the given tasks. A number of new algorithms and methods are introduced to satisfy the human need for information and knowledge and to enable a usable and attractive way of acquiring information. Each method and algorithm is illustrated in a replicable way to enable reproduction of the entire “SemaVis” system or parts of it. The evaluation is scientifically well designed and was performed with enough participants to validate the benefits of the methods. Besides the new approaches and algorithms, readers will find a thorough literature review on Information Visualization and Visual Analytics, semantics and information extraction, and intelligent and adaptive systems. This book is based on an award-winning, distinguished doctoral thesis in computer science.

  3. Task-dependent enhancement of facial expression and identity representations in human cortex.

    Science.gov (United States)

    Dobs, Katharina; Schultz, Johannes; Bülthoff, Isabelle; Gardner, Justin L

    2018-05-15

    What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. Task-specific visual cues for improving process model understanding

    NARCIS (Netherlands)

    Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.

    2016-01-01

    Context: Business process models support various stakeholders in managing business processes and designing process-aware information systems. In order to make effective use of these models, they have to be readily understandable. Objective: Prior research has emphasized the potential of visual cues to

  5. Generation of oculomotor images during tasks requiring visual recognition of polygons.

    Science.gov (United States)

    Olivier, G; de Mendoza, J L

    2001-06-01

    This paper concerns the contribution of mentally simulated ocular exploration to the generation of a visual mental image. In Exp. 1, repeated exploration of the outlines of an irregular decagon allowed incidental learning of the shape. Analyses showed that subjects memorized their ocular movements rather than the polygon itself. In Exp. 2, exploration of a reversible figure such as a Necker cube varied in opposite directions. Both perspective possibilities were then presented. The perspective the subjects recognized depended on the way they had explored the ambiguous figure. In both experiments, during recognition the subjects recalled a visual mental image of the polygon, which they compared with the different polygons proposed for recognition. To interpret the data, hypotheses concerning common processes underlying both the motor intention of ocular movements and the generation of a visual image are suggested.

  6. Visual short-term memory always requires general attention.

    Science.gov (United States)

    Morey, Candice C; Bieler, Malte

    2013-02-01

    The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.

  7. End-User Development of Information Visualization

    DEFF Research Database (Denmark)

    Pantazos, Kostas; Lauesen, Søren; Vatrapu, Ravi

    2013-01-01

    This paper investigates End-User Development of Information Visualization. More specifically, we investigated how existing visualization tools allow end-user developers to construct visualizations. End-user developers have some developing or scripting skills to perform relatively advanced tasks such as data manipulation, but no formal training in programming. 18 visualization tools were surveyed from an end-user developer perspective. The results of this survey study show that end-user developers need better tools to create and modify custom visualizations. A closer collaboration between End-User Development and Information Visualization researchers could contribute towards the development of better tools to support custom visualizations. In addition, as empirical evaluations of these tools are lacking, both research communities should focus more on this aspect. The study serves as a starting point...

  8. Spotting expertise in the eyes: billiards knowledge as revealed by gaze shifts in a dynamic visual prediction task.

    Science.gov (United States)

    Crespi, Sofia; Robino, Carlo; Silva, Ottavia; de'Sperati, Claudio

    2012-10-31

    In sports, as in other activities and knowledge domains, expertise is a highly valuable asset. We assessed whether expertise in billiards is associated with specific patterns of eye movements in a visual prediction task. Professional players and novices were presented with a number of simplified billiard shots on a computer screen, previously filmed in a real set, with the last part of the ball trajectory occluded. They had to predict whether or not the ball would have hit the central skittle. Experts performed better than novices, in terms of both accuracy and response time. By analyzing eye movements, we found that during occlusion experts rarely extrapolated the occluded part of the ball trajectory with their gaze (a behavior that was widespread among novices), even when the unseen path was long and included two bounces. Rather, they looked selectively at specific diagnostic points on the cushions along the ball's visible trajectory, in accordance with a formal metrical system used by professional players to calculate the shot coordinates. Thus, the eye movements of expert observers contained a clear signature of billiard expertise and documented empirically a strategy upgrade in visual problem solving from dynamic, analog simulation in imagery to more efficient rule-based, conceptual knowledge.

  9. Non-binding relationship between visual features

    Directory of Open Access Journals (Sweden)

    Dragan Rangelov

    2014-10-01

    Full Text Available The answer as to how visual attributes processed in different brain loci at different speeds are bound together to give us our unitary experience of the visual world remains unknown. In this study we investigated whether bound representations arise, as commonly assumed, through physiological interactions between cells in the visual areas. In a focal attentional task in which correct responses from either bound or unbound representations were possible, participants discriminated the colour or orientation of briefly presented single bars. On the assumption that representations of the two attributes are bound, the accuracy of reporting the colour and orientation should co-vary. By contrast, if the attributes are not mandatorily bound, the accuracy of reporting the two attributes should be independent. The results of our psychophysical studies reported here supported the latter, non-binding, relationship between visual features, suggesting that binding does not necessarily occur even under focal attention. We propose a task-contingent binding mechanism, postulating that binding occurs at late, post-perceptual, stages through the intervention of memory.

  10. Enhanced visual performance in obsessive compulsive personality disorder.

    Science.gov (United States)

    Ansari, Zohreh; Fadardi, Javad Salehi

    2016-12-01

    Vision is considered a commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were tested with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II); 18 of them (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification, and the controls were 20 persons (mean age = 27.85; SD = 5.26; 84% female) who did not meet the OCPD criteria. Both groups were tested on a modified flicker task assessing two dimensions of visual performance (i.e., visual acuity: detecting the location of change, complexity, and size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but spent more time detecting a change on pairs related to complexity and contrast. OCPD individuals thus seem to have more accurate visual performance than non-OCPD controls. The findings support a relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  11. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection During Visual Search?

    OpenAIRE

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that ...

  12. Phasic alerting increases visual attention capacity in younger but not in older individuals

    DEFF Research Database (Denmark)

    Wiegand, Iris Michaela; Petersen, Anders; Bundesen, Claus

    2017-01-01

    In the present study, we investigated effects of phasic alerting on visual attention in younger and older adults. We modelled parameters of visual attention based on the computational Theory of Visual Attention (TVA) and measured event-related lateralizations (ERLs) in a partial report task, in which ... and attention, which governs the responsiveness to external cues and is critical for general cognitive functioning in aging.
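    The record models performance with the Theory of Visual Attention but does not reproduce the model itself; for background only, the standard TVA rate equation (Bundesen, 1990) for the rate at which object x is categorized as i is:

    ```latex
    v(x,i) = \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
    \qquad
    w_x = \sum_{j \in R} \eta(x,j)\,\pi_j
    ```

    where, in the standard formulation, eta(x,i) is the sensory evidence that object x belongs to category i, beta_i is the decision bias for category i, pi_j are pertinence values, S is the set of objects in the visual field, and R the set of perceptual categories; which of these parameters were estimated in the study above is not stated in the truncated record.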

  13. Neuronal activity in primate prefrontal cortex related to goal-directed behavior during auditory working memory tasks.

    Science.gov (United States)

    Huang, Ying; Brosch, Michael

    2016-06-01

    Prefrontal cortex (PFC) has been documented to play critical roles in goal-directed behaviors, like representing goal-relevant events and working memory (WM). However, neurophysiological evidence for such roles of PFC has been obtained mainly with visual tasks but rarely with auditory tasks. In the present study, we tested roles of PFC in auditory goal-directed behaviors by recording local field potentials in the auditory region of left ventrolateral PFC while a monkey performed auditory WM tasks. The tasks consisted of multiple events and required the monkey to change its mental states to achieve the reward. The events were auditory and visual stimuli, as well as specific actions. Mental states were engaging in the tasks and holding task-relevant information in auditory WM. We found that, although based on recordings from one hemisphere in one monkey only, PFC represented multiple events that were important for achieving reward, including auditory and visual stimuli like turning on and off an LED, as well as bar touch. The responses to auditory events depended on the tasks and on the context of the tasks. This provides support for the idea that neuronal representations in PFC are flexible and can be related to the behavioral meaning of stimuli. We also found that engaging in the tasks and holding information in auditory WM were associated with persistent changes of slow potentials, both of which are essential for auditory goal-directed behaviors. Our study, on a single hemisphere in a single monkey, reveals roles of PFC in auditory goal-directed behaviors similar to those in visual goal-directed behaviors, suggesting that functions of PFC in goal-directed behaviors are probably common across the auditory and visual modality. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Visual attention shifting in autism spectrum disorders.

    Science.gov (United States)

    Richard, Annette E; Lajiness-O'Neill, Renee

    2015-01-01

    Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be

  15. Paying attention to orthography: A visual evoked potential study

    Directory of Open Access Journals (Sweden)

    Anthony Thomas Herdman

    2013-05-01

    Full Text Available In adult readers, letters and words are rapidly identified within visual networks to allow for efficient reading abilities. Neuroimaging studies of orthography have mostly used words and letter strings that recruit many hierarchical levels in reading. Understanding how single letters are processed could provide further insight into orthographic processing. The present study investigated orthographic processing using single letters and pseudoletters when adults were encouraged to pay attention to or away from orthographic features. We measured evoked potentials (EPs) to single letters and pseudoletters from adults while they performed an orthographic-discrimination task (letters vs. pseudoletters), a colour-discrimination task (red vs. blue), and a target-detection task (respond to #1 and #2). Larger and later-peaking N1 responses (~170 ms) and larger P2 responses (~250 ms) occurred to pseudoletters as compared to letters. This reflected greater visual processing for pseudoletters. Dipole analyses localized this effect to bilateral fusiform and inferior temporal cortices. Moreover, this letter-pseudoletter difference was not modulated by task and thus indicates that directing attention to or away from orthographic features didn’t affect early visual processing of single letters or pseudoletters within extrastriate regions. Paying attention to orthography or colour as compared to disregarding the stimuli (target-detection task) elicited selection negativities at about 175 ms, which were followed by classical N2-P3 complexes. This indicated that the tasks sufficiently drew participants’ attention to and away from the stimuli. Together these findings revealed that visual processing of single letters and pseudoletters, in adults, appeared to be sensory-contingent and independent of paying attention to stimulus features (e.g., orthography or colour).

  16. Comparison of animated jet stream visualizations

    Science.gov (United States)

    Nocke, Thomas; Hoffmann, Peter

    2016-04-01

    The visualization of 3D atmospheric phenomena in space and time is still a challenging problem. In particular, multiple solutions of animated jet stream visualizations have been produced in recent years, which were designed to visually analyze and communicate the jet and related impacts on weather circulation patterns and extreme weather events. This PICO integrates popular and new jet animation solutions and inter-compares them. The applied techniques (e.g. stream lines or line integral convolution) and parametrizations (color mapping, line lengths) are discussed with respect to visualization quality criteria and their suitability for certain visualization tasks (e.g. jet patterns and jet anomaly analysis, communicating its relevance for climate change).
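    Line integral convolution, named above as one of the applied techniques, can be illustrated with a minimal and deliberately unoptimized sketch; the grid size, kernel length, and example field are arbitrary choices, not taken from the record.

    ```python
    # Minimal LIC sketch: smear a noise texture along streamlines of a 2D field.
    import numpy as np

    def lic(u, v, noise, length=15):
        """Average noise values along short streamlines of the vector field (u, v)."""
        h, w = noise.shape
        out = np.zeros_like(noise)
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for sign in (1.0, -1.0):                  # integrate forward and backward
                    px, py = float(x), float(y)
                    for _ in range(length):
                        ix, iy = int(px), int(py)
                        if not (0 <= ix < w and 0 <= iy < h):
                            break
                        total += noise[iy, ix]
                        count += 1
                        vx, vy = u[iy, ix], v[iy, ix]
                        norm = float(np.hypot(vx, vy)) or 1.0
                        px += sign * vx / norm            # one-pixel Euler step along the field
                        py += sign * vy / norm
                out[y, x] = total / max(count, 1)
        return out

    # Example: circular flow around the grid center on a 64 x 64 grid
    ys, xs = np.mgrid[0:64, 0:64]
    u = -(ys - 32).astype(float)
    v = (xs - 32).astype(float)
    image = lic(u, v, np.random.default_rng(0).random((64, 64)))
    ```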

  17. 3D Visual Data Mining: goals and experiences

    DEFF Research Database (Denmark)

    Bøhlen, Michael Hanspeter; Bukauskas, Linas; Eriksen, Poul Svante

    2003-01-01

    ... statistical analyses, perceptual and cognitive psychology, and scientific visualization. At the conceptual level we offer perceptual and cognitive insights to guide the information visualization process. We then choose cluster surfaces to exemplify the data mining process, to discuss the tasks involved...

  18. Task-selective memory effects for successfully implemented encoding strategies.

    Directory of Open Access Journals (Sweden)

    Eric D Leshikar

    Full Text Available Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies--visual imagery and sentence generation--facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images, with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study.

  19. The spatiotopic 'visual' cortex of the blind

    Science.gov (United States)

    Likova, Lora

    2012-03-01

    Visual cortex activity in the blind has been shown in sensory tasks. Can it be activated in memory tasks? If so, are inherent features of its organization meaningfully employed? Our recent results in short-term blindfolded subjects imply that human primary visual cortex (V1) may operate as a modality-independent 'sketchpad' for working memory (Likova, 2010a). Interestingly, the spread of the V1 activation approximately corresponded to the spatial extent of the images in terms of their angle of projection to the subject. We now raise the questions of whether under long-term visual deprivation V1 is also employed in non-visual memory task, in particular in congenitally blind individuals, who have never had visual stimulation to guide the development of the visual area organization, and whether such spatial organization is still valid for the same paradigm that was used in blindfolded individuals. The outcome has implications for an emerging reconceptualization of the principles of brain architecture and its reorganization under sensory deprivation. Methods: We used a novel fMRI drawing paradigm in congenitally and late-onset blind, compared with sighted and blindfolded subjects in three conditions of 20s duration, separated by 20s rest-intervals, (i) Tactile Exploration: raised-line images explored and memorized; (ii) Tactile Memory Drawing: drawing the explored image from memory; (iii) Scribble: mindless drawing movements with no memory component. Results and Conclusions: V1 was strongly activated for Tactile Memory Drawing and Tactile Exploration in these totally blind subjects. Remarkably, after training, even in the memory task, the mapping of V1 activation largely corresponded to the angular projection of the tactile stimuli relative to the ego-center (i.e., the effective visual angle at the head); beyond this projective boundary, peripheral V1 signals were dramatically reduced or even suppressed. The matching extent of the activation in the congenitally blind

  20. Dissociation of object and spatial visual processing pathways in human extrastriate cortex

    Energy Technology Data Exchange (ETDEWEB)

    Haxby, J.V.; Grady, C.L.; Horwitz, B.; Ungerleider, L.G.; Mishkin, M.; Carson, R.E.; Herscovitch, P.; Schapiro, M.B.; Rapoport, S.I. (National Institutes of Health, Bethesda, MD (USA))

    1991-03-01

    The existence and neuroanatomical locations of separate extrastriate visual pathways for object recognition and spatial localization were investigated in healthy young men. Regional cerebral blood flow was measured by positron emission tomography and bolus injections of H2(15)O, while subjects performed face matching, dot-location matching, or sensorimotor control tasks. Both visual matching tasks activated lateral occipital cortex. Face discrimination alone activated a region of occipitotemporal cortex that was anterior and inferior to the occipital area activated by both tasks. The spatial location task alone activated a region of lateral superior parietal cortex. Perisylvian and anterior temporal cortices were not activated by either task. These results demonstrate the existence of three functionally dissociable regions of human visual extrastriate cortex. The ventral and dorsal locations of the regions specialized for object recognition and spatial localization, respectively, suggest some homology between human and nonhuman primate extrastriate cortex, with displacement in human brain, possibly related to the evolution of phylogenetically newer cortical areas.